ORB principle and feature matching source code

 

 

  This post briefly compares the characteristics of the three operators ORB, SIFT and SURF, then explains the ORB principle; in the code section it introduces OpenCV's KeyPoint type, feature points, feature descriptors and the configuration of the matching operator.

First, the background:

  At present, feature-matching operators are mainly used in target tracking and in matching several images against each other. The operators with relatively good results are SIFT, SURF and ORB, SURF being obtained by improving SIFT. Briefly: ORB uses FAST for feature detection and, compared with SIFT and SURF, has the advantage of faster running speed (speed comparisons of the three are easy to find online and are not shown here); it was introduced later than SIFT and SURF and is mainly used in real-time image matching. SIFT and SURF have better stability, and SURF, as the improved version of SIFT, also matches faster than SIFT. (SIFT is mentioned here in the hope that readers will understand the algorithm: it was the earliest of the three and still gives relatively good matching results. If readers are interested, they can keep improving SIFT, develop their own "SXXXX", publish papers and file patents... hhhhh.)

Second, the ORB feature matching principle:

  Feature matching can generally be divided into three steps: 1. detect feature points; 2. compute a descriptor for each feature point; 3. match the feature points according to their descriptors.

ORB feature point detection:

  1. For feature point detection, ORB uses the FAST feature detection algorithm. Below is a schematic of FAST feature detection (it bears some similarity to LBP):

    The basic principle of FAST is to compare the pixel value of a point with the pixel values of the points on a circle around it, and to decide from the number of brighter or darker neighbours whether the point is a corner. The principle and procedure are as follows:

      (1) Take a point p and let its pixel value be I_p;

      (2) On a circle of radius r around p, take m points; by default r = 3 and m = 16, as shown in the figure.

      (3) Set a threshold t. If, among the 16 surrounding points, there are N consecutive pixels whose values are all greater than I_p + t or all less than I_p − t, then p is a corner (N may be 12 or 9; experiments suggest N = 9 works better).

      [Figure: FAST detection — the candidate point p and the 16 pixels on the surrounding circle of radius 3]

    The above are the basic steps of FAST feature point detection. Since an image may be 64×64, 128×128, 1440×1080 and so on, running the full test on every pixel costs a lot of computation. FAST therefore improves efficiency by excluding non-corners first: only the four pixels at positions 1, 9, 5 and 13 on the circle are examined (first 1 and 9, then 5 and 13); if p is a corner, at least three of them must be greater than I_p + t or less than I_p − t. If this holds, the full test above is performed; if not, the pixel is rejected directly. A minimal sketch of the segment test is given below.
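    A minimal sketch (not the OpenCV implementation) of the FAST segment test described above, using the standard radius-3 Bresenham circle of 16 pixels; the helper name is illustrative and border handling is assumed (3 ≤ x < cols−3, 3 ≤ y < rows−3):

#include <opencv2/opencv.hpp>

// returns true if N consecutive circle pixels are all brighter than Ip+t or all darker than Ip-t
static bool isFastCorner(const cv::Mat& gray /*CV_8UC1*/, int x, int y, int t, int N = 9)
{
    static const int dx[16] = { 0, 1, 2, 3, 3, 3, 2, 1, 0,-1,-2,-3,-3,-3,-2,-1};
    static const int dy[16] = {-3,-3,-2,-1, 0, 1, 2, 3, 3, 3, 2, 1, 0,-1,-2,-3};
    int Ip = gray.at<uchar>(y, x);

    int brighter = 0, darker = 0;
    for (int k = 0; k < 32; ++k)          // walk the circle twice so wrap-around runs are found
    {
        int v = gray.at<uchar>(y + dy[k % 16], x + dx[k % 16]);
        if (v > Ip + t)      { ++brighter; darker = 0; }
        else if (v < Ip - t) { ++darker; brighter = 0; }
        else                 { brighter = darker = 0; }
        if (brighter >= N || darker >= N)
            return true;
    }
    return false;
}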

    The FAST detector above still has some drawbacks: when N is less than 9, the exclusion trick above cannot be used to speed up the algorithm; the detected corners are not necessarily the optimal corners in the image; and the corners may be densely clustered in parts of the image. To address these drawbacks, FAST can be improved with a machine-learning corner classifier, but ORB does not use it, so it is not described here. If you are interested, see https://www.cnblogs.com/ronny/p/4078710.html.

   FAST as implemented inside ORB:

      ORB uses FAST as its feature point extraction algorithm; the authors found experimentally that FAST-9 gives better results (FAST-9 means 9 consecutive points, FAST-12 means 12). Because the FAST response is strongly affected by edges, ORB ranks the detected keypoints with the Harris corner measure and keeps only the top m keypoints (m given by the nfeatures parameter), which further improves the quality of the corners. The detected corners still lack scale invariance, so the authors adopt the image pyramid approach: FAST detection and the Harris-based filtering are run on every level of the pyramid, and the qualifying corners are kept.

      Image pyramid:

        An image pyramid is a multi-scale representation of an image: every layer of the pyramid holds the image at a different scale, and from top to bottom the resolution grows by a fixed ratio (the scaling can be done with interpolation, filtering, etc.). The following shows a pyramid representation:

        [Figure: image pyramid — each level is the image at a different scale]
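        A minimal sketch of building such a pyramid with OpenCV (assuming a grayscale cv::Mat image and ORB's default scale factor of 1.2; the helper name is illustrative):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

std::vector<cv::Mat> buildPyramid(const cv::Mat& image, int nlevels = 8, double scaleFactor = 1.2)
{
    std::vector<cv::Mat> pyramid(nlevels);
    pyramid[0] = image;                                        // level 0 is the original image
    for (int level = 1; level < nlevels; ++level)
    {
        double scale = 1.0 / std::pow(scaleFactor, level);     // each level shrinks by scaleFactor
        cv::Size sz(cvRound(image.cols * scale), cvRound(image.rows * scale));
        cv::resize(image, pyramid[level], sz, 0, 0, cv::INTER_LINEAR);
    }
    return pyramid;
}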

 

 

     

  2. ORB descriptor calculation:

    Definition of the descriptor: after the feature points have been obtained, a descriptor is computed for each of them, i.e. a representation of the feature point's appearance; it is what the later matching step actually compares.

    Properties of the descriptor: the descriptor represents the neighbourhood of a corner for feature matching. A good descriptor algorithm should be invariant to distance (understood as image scale), viewing angle and illumination; in other words, the descriptor should be reproducible.

    For the descriptor calculation part, ORB uses the BRIEF algorithm to compute the feature point descriptor; BRIEF is described below. (The reader may be surprised that, from feature point detection to descriptor calculation, ORB reuses existing algorithms. The blogger is also somewhat helpless about this; SIFT and SURF are patented, and ORB is currently the more popular choice for real-time feature matching.)

    The BRIEF algorithm was proposed in 2010 in the paper "BRIEF: Binary Robust Independent Elementary Features". It is a binary-coded descriptor (slightly similar to the LBP feature). It abandons the traditional region-histogram way of describing feature points, which speeds up the construction of the descriptor and reduces the matching time.

    The simplified BRIEF algorithm can be divided into two steps:

      1. Take an S×S window centred on the feature point. Inside the window, select a pair of points p(x) and p(y) according to a certain rule, compare the two pixel values, and assign a bit as follows:

      τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise

 

 

        2. Select N point pairs in the window according to the same rule and repeat the assignment above, forming an N-bit binary code (typically N = 256); the pairs p(x) and p(y) are picked one after another with some randomness. A minimal sketch of these two steps is given right below.
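        A minimal sketch of these two BRIEF steps (not the OpenCV code; the Gaussian sampling rule, the seed and the names are illustrative, and border checks are omitted for brevity — real code must keep the sampled points inside the image):

#include <opencv2/opencv.hpp>
#include <bitset>
#include <random>

std::bitset<256> briefDescriptor(const cv::Mat& gray /*CV_8UC1*/, cv::Point2f kp,
                                 int S = 31, unsigned seed = 42)
{
    std::mt19937 rng(seed);                          // same seed for every keypoint -> same pattern
    std::normal_distribution<float> gauss(0.f, S / 5.f);
    std::bitset<256> desc;
    for (int i = 0; i < 256; ++i)
    {
        // one point pair (x, y) drawn from an isotropic Gaussian centred on the keypoint
        cv::Point px(cvRound(kp.x + gauss(rng)), cvRound(kp.y + gauss(rng)));
        cv::Point py(cvRound(kp.x + gauss(rng)), cvRound(kp.y + gauss(rng)));
        // tau(p; x, y) = 1 if p(x) < p(y)
        desc[i] = gray.at<uchar>(px) < gray.at<uchar>(py);
    }
    return desc;
}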

        A certain rule: the BRIEF authors tested the five sampling strategies shown below, of which strategy (2) works best:

        [Figure: the five point-pair sampling strategies tested by the BRIEF authors]

 

 

         ORB's improvements to BRIEF:

        ORB's changes to BRIEF can be divided into two parts, steered BRIEF and rBRIEF, introduced separately below. First, the feature points detected by FAST above are still not rotation invariant. ORB determines the dominant orientation of each feature point with the intensity centroid method and uses it to improve BRIEF; the following is the principle of the rotation-invariance part.

          Rotation invariance:

            ORB uses the intensity centroid method to compute a dominant orientation for every feature point. The intensity centroid method is defined through the patch moments:

            m_pq = Σ_{x,y} x^p · y^q · I(x, y)

 

            where I(x, y) is the gray value at point (x, y). As shown in the figure below, the centroid C of the patch is C = (m10/m00, m01/m00), with m10 = Σ x·I(x, y) and m01 = Σ y·I(x, y).

            [Figure: the patch centroid C; the orientation is the direction of the vector from the corner to C]

 

            The orientation of the feature point is defined as θ = atan2(m01, m10), where atan2 is the quadrant-aware arctangent (for example, m01 = 30 and m10 = 40 give θ ≈ 36.9°).

 

         

 

    steered BRIEF (rotation invariance):

      Suppose the original BRIEF algorithm selects n point pairs in the S×S neighbourhood of the feature point; rotating them by the angle θ gives a new set of point pairs:

      Dθ = Rθ · D, where D = [x1 … xn; y1 … yn] holds the coordinates of the sampling points and Rθ = [cos θ, −sin θ; sin θ, cos θ]

 

 

         Here Dθ is the rotated point set and Rθ is the rotation matrix. The point pairs are compared at the new positions to form the binary-string descriptor.
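         A minimal sketch of the rotation applied by steered BRIEF (the function name is illustrative): every sampling point of the pattern is rotated by the keypoint orientation θ, in radians, before the intensity comparison.

#include <opencv2/opencv.hpp>
#include <cmath>

// [x'; y'] = R_theta * [x; y]
cv::Point2f steerPatternPoint(const cv::Point2f& d, float theta)
{
    float a = std::cos(theta), b = std::sin(theta);
    return cv::Point2f(a * d.x - b * d.y, b * d.x + a * d.y);
}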

 

     The rBRIEF improvement:

        The point pairs of the BRIEF descriptor are generated randomly, and different random sequences give different results. To find better point pairs, the ORB authors extracted 300k feature points from a large data set and, for each feature point, formed vectors from M different candidate point-pair tests, giving a 300k × M matrix in which each column holds the binary values obtained with one point pair p_i. The columns are sorted by their variance from large to small, and a greedy algorithm then drops columns that are too strongly correlated with those already kept, until the point-pair positions of the 256 best columns remain. The descriptor built from these positions is called rBRIEF. In ORB these fixed point pairs are additionally rotated by θ, in the same way as in steered BRIEF above. A rough sketch of the greedy selection is given below.
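        A rough sketch of the greedy selection described above (names and the correlation threshold are illustrative, not the authors' exact procedure): candidate tests are columns of a 0/1 matrix with one row per training keypoint, ordered by variance; a test is kept only if its correlation with every test already kept stays low, until 256 remain.

#include <vector>
#include <cmath>
#include <numeric>
#include <algorithm>

std::vector<int> greedySelect(const std::vector<std::vector<float> >& cols,  // cols[j][k] in {0,1}
                              int wanted = 256, double maxCorr = 0.2)
{
    const int n = (int)cols.size();
    std::vector<double> mean(n);
    for (int j = 0; j < n; ++j)
        mean[j] = std::accumulate(cols[j].begin(), cols[j].end(), 0.0) / cols[j].size();

    // high variance for a 0/1 column means its mean is close to 0.5
    std::vector<int> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return std::fabs(mean[a] - 0.5) < std::fabs(mean[b] - 0.5); });

    std::vector<int> kept;
    for (int j : order)
    {
        bool ok = true;
        for (int k : kept)
        {
            double cov = 0.0;
            for (size_t r = 0; r < cols[j].size(); ++r)
                cov += (cols[j][r] - mean[j]) * (cols[k][r] - mean[k]);
            cov /= cols[j].size();
            double varJ = mean[j] * (1.0 - mean[j]), varK = mean[k] * (1.0 - mean[k]);
            double corr = cov / (std::sqrt(varJ * varK) + 1e-12);
            if (std::fabs(corr) > maxCorr) { ok = false; break; }   // too correlated with a kept test
        }
        if (ok) kept.push_back(j);
        if ((int)kept.size() == wanted) break;
    }
    return kept;   // indices of the selected point-pair tests
}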


 

 

 

 

      Finally, matching is performed. In OpenCV, the matching for ORB, SIFT and SURF is done through the same set of wrapped matcher classes; below, OpenCV's KeyPoint type, the matching algorithm and the matching operators are introduced (mainly following the code inside OpenCV).

Third, the code:

    First, the matching part, DescriptorMatcher:

 

The feature-matching part of OpenCV is implemented through DescriptorMatcher. DescriptorMatcher is the abstract base class of matchers; its concrete classes are:
        class FlannBasedMatcher
        class BFMatcher
      The source code has not been located for now, so the internal construction is not covered here; only the usage is introduced. A matcher can be created in two ways (way 1 is recommended: convenient, with more interfaces):
      Way 1:
          static Ptr<DescriptorMatcher> create(const string& descriptorMatcherType)
          e.g.:   cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
      Way 2:
          DescriptorMatcher* pMatcher = new BFMatcher;
       DescriptorMatcher has the following interface functions (for ordinary use, create() and match() are enough):
        create(): creates a matcher; the supported matcher types include BruteForce, BruteForce-L1, BruteForce-Hamming, BruteForce-Hamming(2) and FlannBased (the blogger currently uses FlannBased with acceptable results; the others have not been tried)
        match(desL, desR, matches): matches by descriptors; desL and desR are two descriptors of type Mat, matches is a vector of DMatch (the DMatch type is introduced below)
        knnMatch(): finds the best k matches for each query descriptor
        add(): adds descriptors to the matcher's training collection; if the collection is not empty, the new descriptors are appended to it
        getTrainDescriptors(): returns the training descriptor collection
        isMaskSupported(): returns true if the matcher supports masks
        radiusMatch(): finds the matches whose distance does not exceed a given radius
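        A small usage sketch (assuming two descriptor matrices despL and despR and the matcher created above): knnMatch() with k = 2 plus Lowe's ratio test is a common way to keep only distinctive matches.

std::vector<std::vector<cv::DMatch> > knnMatches;
matcher->knnMatch(despL, despR, knnMatches, 2);          // the 2 best candidates per query descriptor

std::vector<cv::DMatch> goodMatches;
for (size_t i = 0; i < knnMatches.size(); i++)
{
    // keep a match only when the best distance is clearly smaller than the second best
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.75f * knnMatches[i][1].distance)
        goodMatches.push_back(knnMatches[i][0]);
}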

 

The DMatch type:

 Simplified version (for those who do not want to read the source code below): DMatch contains four useful fields. queryIdx is the index of the descriptor to be matched (the query), trainIdx is the index of the descriptor it is matched against (the train), imgIdx is the index of the matched image, and distance is the distance between the two descriptors: the smaller it is, the better the match. queryIdx is the keypoint index of the image whose descriptors are passed as the first argument of match(); trainIdx is the keypoint index for the second argument. imgIdx is only used when matching against several images, to indicate which image the match comes from. Note: query is the one doing the matching, train is the one being matched against. A short usage sketch follows; the full demonstration appears further below.
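A tiny sketch of using these fields (assuming the keyPointL, keyPointR and matches variables from the demonstration further below):

std::vector<cv::Point2f> ptsL, ptsR;
for (size_t i = 0; i < matches.size(); i++)
{
    ptsL.push_back(keyPointL[matches[i].queryIdx].pt);   // query = first argument of match()
    ptsR.push_back(keyPointR[matches[i].trainIdx].pt);   // train = second argument of match()
}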


/*
 * DMatch is a struct mainly used to store matching information. query is the descriptor to be matched,
 * train is the descriptor it is matched against. When matching in OpenCV:
 *   void DescriptorMatcher::match( const Mat& queryDescriptors, const Mat& trainDescriptors,
 *                                  vector<DMatch>& matches, const Mat& mask ) const
 * the first descriptor parameter of match() is the query descriptor, the second is the train descriptor.
 * For example: with 20 query descriptors and 30 train descriptors, the vector<DMatch> produced by
 * DescriptorMatcher::match has size 20; the other way round, its size would be 30.
 */
struct CV_EXPORTS_W_SIMPLE DMatch
{
    // default constructor; FLT_MAX is "infinity"
    // #define FLT_MAX 3.402823466e+38F  /* max value */
    CV_WRAP DMatch() : queryIdx(-1), trainIdx(-1), imgIdx(-1), distance(FLT_MAX) {}
    // DMatch constructor
    CV_WRAP DMatch( int _queryIdx, int _trainIdx, float _distance ) :
        queryIdx(_queryIdx), trainIdx(_trainIdx), imgIdx(-1), distance(_distance) {}
    // DMatch constructor
    CV_WRAP DMatch( int _queryIdx, int _trainIdx, int _imgIdx, float _distance ) :
        queryIdx(_queryIdx), trainIdx(_trainIdx), imgIdx(_imgIdx), distance(_distance) {}

    // queryIdx: index of the query descriptor, the first descriptor argument of match()
    CV_PROP_RW int queryIdx; // query descriptor index
    // trainIdx: index of the train descriptor, the second descriptor argument of match()
    CV_PROP_RW int trainIdx; // train descriptor index
    // imgIdx: index of the train image. Useful when, for example, the SIFT descriptors of one image
    // are matched against the descriptors of ten other images to find the most similar one.
    CV_PROP_RW int imgIdx;   // train image index
    // distance between the two descriptors
    CV_PROP_RW float distance;

    // operator< compares DMatch by distance: true if smaller, false otherwise; less is better
    bool operator<( const DMatch &m ) const
    {
        return distance < m.distance;
    }
};

 The KeyPoint type (no text version of its definition was found, so a few images are borrowed):

Simplified version (the lazy route): the mainly used KeyPoint members are pt (the 2-D coordinates of the point), size (the diameter of the keypoint's neighbourhood), angle (the orientation of the keypoint, 0–360°), response (the response strength of the keypoint, usable for ranking), octave (the pyramid level the keypoint was detected on) and class_id (used to distinguish keypoints when classifying images; the default is −1; it is not used here, and by the time you need it you will already understand what it means). A short sketch of reading these members follows.
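A short sketch of reading these members (assuming a detected std::vector<cv::KeyPoint> keyPointL as in the demonstration below; needs <iostream>):

for (size_t i = 0; i < keyPointL.size(); i++)
{
    const cv::KeyPoint& kp = keyPointL[i];
    std::cout << "pt=(" << kp.pt.x << "," << kp.pt.y << ")"
              << " size=" << kp.size << " angle=" << kp.angle
              << " response=" << kp.response << " octave=" << kp.octave
              << " class_id=" << kp.class_id << std::endl;
}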

[Images: the cv::KeyPoint class members, from the OpenCV documentation]

 

The KeyPointsFilter type (a wrapped helper class for processing KeyPoint vectors, already used inside the algorithms; generally you only need it when writing your own feature-matching algorithm):

      KeyPointsFilter contains five utility functions: runByImageBorder(), runByKeypointSize(), runByPixelsMask(), removeDuplicated() and retainBest(). Its declaration is roughly as sketched below:
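      A hedged sketch of the class declaration (paraphrased from OpenCV's features2d.hpp; check your OpenCV version for the exact signatures):

class KeyPointsFilter
{
public:
    // remove keypoints closer than borderSize to the image border
    static void runByImageBorder( std::vector<cv::KeyPoint>& keypoints, cv::Size imageSize, int borderSize );
    // remove keypoints whose size is outside [minSize, maxSize]
    static void runByKeypointSize( std::vector<cv::KeyPoint>& keypoints, float minSize, float maxSize = FLT_MAX );
    // remove keypoints lying on zero pixels of the mask
    static void runByPixelsMask( std::vector<cv::KeyPoint>& keypoints, const cv::Mat& mask );
    // remove duplicated keypoints
    static void removeDuplicated( std::vector<cv::KeyPoint>& keypoints );
    // keep the npoints keypoints with the strongest response
    static void retainBest( std::vector<cv::KeyPoint>& keypoints, int npoints );
};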

 

ORB API:

It is used in three parts (creating the detector, detecting keypoints, computing descriptors) plus matching:

       cv::Ptr<cv::ORB> orb = cv::ORB::create(50);      // create the ORB object; the argument is the number of features to retain

       std::vector<cv::KeyPoint> keyPointL, keyPointR;  // keypoint containers

       orb->detect(imageL, keyPointL);                  // keypoint detection
       orb->detect(imageR, keyPointR);                  // keypoint detection

       cv::Mat despL, despR;                            // descriptor matrices
       orb->compute(imageL, keyPointL, despL);          // compute the descriptors of the detected keypoints
       orb->compute(imageR, keyPointR, despR);          // compute the descriptors of the detected keypoints

       // detectAndCompute() does detection and description in one call, equivalent to detect() + compute() above;
       // cv::Mat() means no mask (no ROI restriction)
       orb->detectAndCompute(imageL, cv::Mat(), keyPointL, despL);
       orb->detectAndCompute(imageR, cv::Mat(), keyPointR, despR);

       std::vector<cv::DMatch> matches;
       cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
       matcher->match(despL, despR, matches);           // matching

       std::vector<int> point_l, point_r;
       for (int i = 0; i < (int)matches.size(); i++)    // collect the matched keypoint indices
       {
           point_l.push_back(matches[i].queryIdx);
           point_r.push_back(matches[i].trainIdx);
       }

 

A practical demonstration:


#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>   // only needed for SIFT/SURF; ORB itself lives in features2d

int main()
{
    cv::Mat imageL = cv::imread(".I/like/Study.png");          // path of image 1
    cv::Mat imageR = cv::imread("./Study/make/me/happy.png");  // path of image 2
    cv::cvtColor(imageL, imageL, cv::COLOR_BGR2GRAY);
    cv::cvtColor(imageR, imageR, cv::COLOR_BGR2GRAY);
    cv::resize(imageL, imageL, cv::Size(120, 90));
    cv::resize(imageR, imageR, cv::Size(120, 90));

    // ORB
    cv::Ptr<cv::ORB> orb = cv::ORB::create(50);

    // keypoints
    std::vector<cv::KeyPoint> keyPointL, keyPointR;
    // detect the keypoints on their own
    orb->detect(imageL, keyPointL);
    orb->detect(imageR, keyPointR);
    // draw the keypoints
    cv::Mat keyPointImageL;
    cv::Mat keyPointImageR;
    cv::drawKeypoints(imageL, keyPointL, keyPointImageL, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    cv::drawKeypoints(imageR, keyPointR, keyPointImageR, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);

    // display windows
    cv::namedWindow("KeyPoints of imageL");
    cv::namedWindow("KeyPoints of imageR");

    // show the keypoints
    cv::imshow("KeyPoints of imageL", keyPointImageL);
    cv::imshow("KeyPoints of imageR", keyPointImageR);

    // feature matching
    cv::Mat despL, despR;
    // detect the keypoints and compute the descriptors in one call
    orb->detectAndCompute(imageL, cv::Mat(), keyPointL, despL);
    orb->detectAndCompute(imageR, cv::Mat(), keyPointR, despR);

    std::vector<cv::DMatch> matches;

    // with the FlannBased matcher, ORB's binary descriptors must first be converted to CV_32F
    if (despL.type() != CV_32F || despR.type() != CV_32F)
    {
        despL.convertTo(despL, CV_32F);
        despR.convertTo(despR, CV_32F);
    }

    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
    matcher->match(despL, despR, matches);

    // maximum distance among all matches (could be used for distance thresholding; not used below)
    double maxDist = 0;
    for (int i = 0; i < despL.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist > maxDist)
            maxDist = dist;
    }

    // pick the good matches: keep a match only if it is among the ~15 smallest distances
    std::vector<cv::DMatch> good_matches;
    int size = (int)matches.size();
    for (int i = 0; i < despL.rows; i++)
    {
        int low = 0;
        for (int n = 0; n < despL.rows; n++)
        {
            if (matches[i].distance < matches[n].distance)
            {
                low += 1;   // low counts how many matches are worse than matches[i]
            }
        }

        if (low >= size - 15)
        {
            good_matches.push_back(matches[i]);
        }
    }

    cv::Mat imageOutput;
    cv::drawMatches(imageL, keyPointL, imageR, keyPointR, good_matches, imageOutput);

    cv::namedWindow("picture of matching", cv::WINDOW_NORMAL);
    cv::imshow("picture of matching", imageOutput);
    cv::waitKey(0);
    return 0;
}

 

 ORB source code (OpenCV 2.x):

class CV_EXPORTS_W ORB : public Feature2D
{
public:
    // the size of the signature in bytes
    enum { kBytes = 32, HARRIS_SCORE = 0, FAST_SCORE = 1 };

    CV_WRAP explicit ORB( int nfeatures = 500, float scaleFactor = 1.2f, int nlevels = 8, int edgeThreshold = 31,
         int firstLevel = 0, int WTA_K = 2, int scoreType = ORB::HARRIS_SCORE, int patchSize = 31 );

    // returns the descriptor size in bytes
    int descriptorSize() const;
    // returns the descriptor type
    int descriptorType() const;

    // computes the ORB keypoints on an image
    void operator()(InputArray image, InputArray mask, vector<KeyPoint>& keypoints) const;

    // computes the ORB keypoints and descriptors on an image
    void operator()( InputArray image, InputArray mask, vector<KeyPoint>& keypoints,
                     OutputArray descriptors, bool useProvidedKeypoints = false ) const;

    AlgorithmInfo* info() const;

protected:

    void computeImpl( const Mat& image, vector<KeyPoint>& keypoints, Mat& descriptors ) const;
    void detectImpl( const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask = Mat() ) const;

    CV_PROP_RW int nfeatures;        // maximum number of features to retain
    CV_PROP_RW double scaleFactor;   // pyramid scale factor
    CV_PROP_RW int nlevels;          // number of pyramid levels
    CV_PROP_RW int edgeThreshold;    // border in which features are not detected
    CV_PROP_RW int firstLevel;       // pyramid level on which the source image is put
    CV_PROP_RW int WTA_K;            // number of points producing each descriptor element
    CV_PROP_RW int scoreType;        // HARRIS_SCORE or FAST_SCORE
    CV_PROP_RW int patchSize;        // size of the patch used by the descriptor
};
/** Compute the ORB features and descriptors on an image
 * @param keypoints the resulting keypoints
 * @param do_keypoints if true, the keypoints are computed, otherwise used as an input
 */
void ORB::operator()( InputArray _image, InputArray _mask, vector<KeyPoint>& _keypoints,
                      OutputArray _descriptors, bool useProvidedKeypoints) const
{
    CV_Assert(patchSize >= 2);

    bool do_keypoints = !useProvidedKeypoints;
    bool do_descriptors = _descriptors.needed();

    if( (!do_keypoints && !do_descriptors) || _image.empty() )
        return;

    // border large enough for the Harris window, the patch and the edge threshold
    const int HARRIS_BLOCK_SIZE = 9;
    int halfPatchSize = patchSize / 2;
    int border = std::max(edgeThreshold, std::max(halfPatchSize, HARRIS_BLOCK_SIZE/2))+1;

    Mat image = _image.getMat(), mask = _mask.getMat();
    if( image.type() != CV_8UC1 )
        cvtColor(_image, image, CV_BGR2GRAY);

    int levelsNum = this->nlevels;

    if( !do_keypoints )
    {
        // keypoints were provided by the caller: take the number of pyramid levels
        // from the largest octave found among them
        levelsNum = 0;
        for( size_t i = 0; i < _keypoints.size(); i++ )
            levelsNum = std::max(levelsNum, std::max(_keypoints[i].octave, 0));
        levelsNum++;
    }

    // Pre-compute the scale pyramid
    vector<Mat> imagePyramid(levelsNum), maskPyramid(levelsNum);
    for (int level = 0; level < levelsNum; ++level)
    {
        float scale = 1/getScale(level, firstLevel, scaleFactor);
        /* static inline float getScale(int level, int firstLevel, double scaleFactor)
         * {
         *     return (float)std::pow(scaleFactor, (double)(level - firstLevel));
         * }
         */
        Size sz(cvRound(image.cols*scale), cvRound(image.rows*scale));
        Size wholeSize(sz.width + border*2, sz.height + border*2);
        Mat temp(wholeSize, image.type()), masktemp;
        imagePyramid[level] = temp(Rect(border, border, sz.width, sz.height));
        if( !mask.empty() )
        {
            masktemp = Mat(wholeSize, mask.type());
            maskPyramid[level] = masktemp(Rect(border, border, sz.width, sz.height));
        }

        // Compute the resized image
        if( level != firstLevel )
        {
            if( level < firstLevel )
            {
                resize(image, imagePyramid[level], sz, 0, 0, INTER_LINEAR);
                if (!mask.empty())
                    resize(mask, maskPyramid[level], sz, 0, 0, INTER_LINEAR);
            }
            else
            {
                resize(imagePyramid[level-1], imagePyramid[level], sz, 0, 0, INTER_LINEAR);
                if (!mask.empty())
                {
                    resize(maskPyramid[level-1], maskPyramid[level], sz, 0, 0, INTER_LINEAR);
                    threshold(maskPyramid[level], maskPyramid[level], 254, 0, THRESH_TOZERO);
                }
            }

            copyMakeBorder(imagePyramid[level], temp, border, border, border, border,
                           BORDER_REFLECT_101+BORDER_ISOLATED);
            if (!mask.empty())
                copyMakeBorder(maskPyramid[level], masktemp, border, border, border, border,
                               BORDER_CONSTANT+BORDER_ISOLATED);
        }
        else
        {
            copyMakeBorder(image, temp, border, border, border, border,
                           BORDER_REFLECT_101);
            if( !mask.empty() )
                copyMakeBorder(mask, masktemp, border, border, border, border,
                               BORDER_CONSTANT+BORDER_ISOLATED);
        }
    }

    // Pre-compute the keypoints (the best ones are kept over all scales, so this is done beforehand)
    vector < vector<KeyPoint> > allKeypoints;
    if( do_keypoints )
    {
        // get keypoints far enough from the border so that no check is required for the descriptor
        computeKeyPoints(imagePyramid, maskPyramid, allKeypoints,
                         nfeatures, firstLevel, scaleFactor,
                         edgeThreshold, patchSize, scoreType);

        // (a block commented out in the original source would re-cluster the keypoints per level here:
        //      for (int level = 0; level < n_levels; ++level)
        //          vector<KeyPoint>& keypoints = all_keypoints[level];
        //          keypoints.clear(); ...
        //          keypoint_end = temp.end(); keypoint != keypoint_end; ++keypoint) ...
        //  it is kept disabled in OpenCV.)
    }
    else
    {
        // Remove keypoints very close to the border
        KeyPointsFilter::runByImageBorder(_keypoints, image.size(), edgeThreshold);

        // Cluster the input keypoints depending on the level they were computed at
        allKeypoints.resize(levelsNum);
        for (vector<KeyPoint>::iterator keypoint = _keypoints.begin(),
             keypointEnd = _keypoints.end(); keypoint != keypointEnd; ++keypoint)
            allKeypoints[keypoint->octave].push_back(*keypoint);

        // Make sure the coordinates are rescaled to the corresponding pyramid level
        for (int level = 0; level < levelsNum; ++level)
        {
            if (level == firstLevel)
                continue;

            vector<KeyPoint> & keypoints = allKeypoints[level];
            float scale = 1/getScale(level, firstLevel, scaleFactor);
            for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
                 keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
                keypoint->pt *= scale;
        }
    }

    Mat descriptors;
    vector<Point> pattern;

    if( do_descriptors )
    {
        int nkeypoints = 0;
        for (int level = 0; level < levelsNum; ++level)
            nkeypoints += (int)allKeypoints[level].size();
        if( nkeypoints == 0 )
            _descriptors.release();
        else
        {
            _descriptors.create(nkeypoints, descriptorSize(), CV_8U);
            descriptors = _descriptors.getMat();
        }

        const int npoints = 512;
        Point patternbuf[npoints];
        const Point* pattern0 = (const Point*)bit_pattern_31_;   // the learned rBRIEF pattern

        if( patchSize != 31 )
        {
            pattern0 = patternbuf;
            makeRandomPattern(patchSize, patternbuf, npoints);
        }

        CV_Assert( WTA_K == 2 || WTA_K == 3 || WTA_K == 4 );

        if( WTA_K == 2 )
            std::copy(pattern0, pattern0 + npoints, std::back_inserter(pattern));
        else
        {
            int ntuples = descriptorSize()*4;
            initializeOrbPattern(pattern0, pattern, ntuples, WTA_K, npoints);
        }
    }

    _keypoints.clear();
    int offset = 0;
    for (int level = 0; level < levelsNum; ++level)
    {
        // Get the features computed on this level
        vector<KeyPoint>& keypoints = allKeypoints[level];
        int nkeypoints = (int)keypoints.size();

        // Compute the descriptors
        if (do_descriptors)
        {
            Mat desc;
            if (!descriptors.empty())
            {
                desc = descriptors.rowRange(offset, offset + nkeypoints);
            }

            offset += nkeypoints;
            // preprocess the resized image: blur it before sampling the point pairs
            Mat& workingMat = imagePyramid[level];
            GaussianBlur(workingMat, workingMat, Size(7, 7), 2, 2, BORDER_REFLECT_101);
            computeDescriptors(workingMat, keypoints, desc, pattern, descriptorSize(), WTA_K);
        }

        // Copy to the output data: scale the keypoint coordinates back to the original image
        if (level != firstLevel)
        {
            float scale = getScale(level, firstLevel, scaleFactor);
            for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
                 keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
                keypoint->pt *= scale;
        }
        // And add the keypoints to the output
        _keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());
    }
}
/** Compute the ORB keypoints on an image
 * @param imagePyramid the image pyramid to compute the features on
 * @param maskPyramid the masks to apply at every level
 * @param allKeypoints the resulting keypoints, clustered per level
 */
static void computeKeyPoints(const vector<Mat>& imagePyramid,
                             const vector<Mat>& maskPyramid,
                             vector<vector<KeyPoint> >& allKeypoints,
                             int nfeatures, int firstLevel, double scaleFactor,
                             int edgeThreshold, int patchSize, int scoreType )
{
    int nlevels = (int)imagePyramid.size();
    vector<int> nfeaturesPerLevel(nlevels);

    // distribute the desired number of features over the pyramid levels (geometric series)
    float factor = (float)(1.0 / scaleFactor);
    float ndesiredFeaturesPerScale = nfeatures*(1 - factor)/(1 - (float)pow((double)factor, (double)nlevels));

    int sumFeatures = 0;
    for( int level = 0; level < nlevels-1; level++ )
    {
        nfeaturesPerLevel[level] = cvRound(ndesiredFeaturesPerScale);
        sumFeatures += nfeaturesPerLevel[level];
        ndesiredFeaturesPerScale *= factor;
    }
    nfeaturesPerLevel[nlevels-1] = std::max(nfeatures - sumFeatures, 0);

    // pre-compute the end of a row in a circular patch (used by the orientation computation)
    int halfPatchSize = patchSize / 2;
    vector<int> umax(halfPatchSize + 2);

    int v, v0, vmax = cvFloor(halfPatchSize * sqrt(2.f) / 2 + 1);
    int vmin = cvCeil(halfPatchSize * sqrt(2.f) / 2);
    for (v = 0; v <= vmax; ++v)
        umax[v] = cvRound(sqrt((double)halfPatchSize * halfPatchSize - v * v));

    // Make sure the patch is symmetric
    for (v = halfPatchSize, v0 = 0; v >= vmin; --v)
    {
        while (umax[v0] == umax[v0 + 1])
            ++v0;
        umax[v] = v0;
        ++v0;
    }

    allKeypoints.resize(nlevels);

    for (int level = 0; level < nlevels; ++level)
    {
        int featuresNum = nfeaturesPerLevel[level];
        allKeypoints[level].reserve(featuresNum*2);

        vector<KeyPoint> & keypoints = allKeypoints[level];

        // Detect FAST features, 20 is a good threshold
        FastFeatureDetector fd(20, true);
        fd.detect(imagePyramid[level], keypoints, maskPyramid[level]);

        // Remove keypoints very close to the border
        KeyPointsFilter::runByImageBorder(keypoints, imagePyramid[level].size(), edgeThreshold);

        if( scoreType == ORB::HARRIS_SCORE )
        {
            // Keep more points than necessary as FAST does not give amazing corners
            KeyPointsFilter::retainBest(keypoints, 2 * featuresNum);

            // Compute the Harris cornerness (better scoring than FAST)
            HarrisResponses(imagePyramid[level], keypoints, 7, HARRIS_K);
        }

        // cull to the final desired number, using the new Harris scores or the original FAST scores
        KeyPointsFilter::retainBest(keypoints, featuresNum);

        float sf = getScale(level, firstLevel, scaleFactor);

        // Set the level and the size of the keypoints
        for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
             keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
        {
            keypoint->octave = level;
            keypoint->size = patchSize*sf;
        }

        // Compute the orientation of every keypoint (intensity centroid)
        computeOrientation(imagePyramid[level], keypoints, halfPatchSize, umax);
    }
}
static void computeOrientation(const Mat& image, vector<KeyPoint>& keypoints,
                               int halfPatchSize, const vector<int>& umax)
{
    // Process each keypoint: its angle is given by the intensity centroid of its patch
    for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
         keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
    {
        keypoint->angle = IC_Angle(image, halfPatchSize, keypoint->pt, umax);
    }
}

static float IC_Angle(const Mat& image, const int half_k, Point2f pt,
                      const vector<int>& u_max)
{
    int m_01 = 0, m_10 = 0;

    const uchar* center = &image.at<uchar> (cvRound(pt.y), cvRound(pt.x));

    // Treat the center line differently, v=0
    for (int u = -half_k; u <= half_k; ++u)
        m_10 += u * center[u];

    // Go line by line in the circular patch
    int step = (int)image.step1();
    for (int v = 1; v <= half_k; ++v)
    {
        // Proceed over the two lines +v and -v at once
        int v_sum = 0;
        int d = u_max[v];
        for (int u = -d; u <= d; ++u)
        {
            int val_plus = center[u + v*step], val_minus = center[u - v*step];
            v_sum += (val_plus - val_minus);      // contributes to m_01 with weight v
            m_10 += u * (val_plus + val_minus);
        }
        m_01 += v * v_sum;
    }

    // the orientation is atan2(m_01, m_10)
    return fastAtan2((float)m_01, (float)m_10);
}

static void computeDescriptors(const Mat& image, vector<KeyPoint>& keypoints, Mat& descriptors,
                               const vector<Point>& pattern, int dsize, int WTA_K)
{
    // the image must already be grayscale
    CV_Assert(image.type() == CV_8UC1);
    // create the descriptor mat: keypoints.size() rows, dsize (bytes) cols
    descriptors = Mat::zeros((int)keypoints.size(), dsize, CV_8UC1);

    for (size_t i = 0; i < keypoints.size(); i++)
        computeOrbDescriptor(keypoints[i], image, &pattern[0], descriptors.ptr((int)i), dsize, WTA_K);
}
static void computeOrbDescriptor(const KeyPoint& kpt,
                                 const Mat& img, const Point* pattern,
                                 uchar* desc, int dsize, int WTA_K)
{
    float angle = kpt.angle;
    // rotate the sampling pattern by the keypoint angle (degrees -> radians)
    angle *= (float)(CV_PI/180.f);
    float a = (float)cos(angle), b = (float)sin(angle);

    const uchar* center = &img.at<uchar>(cvRound(kpt.pt.y), cvRound(kpt.pt.x));
    int step = (int)img.step;

#if 1
    // nearest-neighbour lookup of the rotated pattern point idx
    #define GET_VALUE(idx) \
        center[cvRound(pattern[idx].x*b + pattern[idx].y*a)*step + \
               cvRound(pattern[idx].x*a - pattern[idx].y*b)]
#else
    // alternative, disabled in the original source: bilinear interpolation of the rotated point
    float x, y;
    int ix, iy;
    #define GET_VALUE(idx) \
        (x = pattern[idx].x*a - pattern[idx].y*b, \
        y = pattern[idx].x*b + pattern[idx].y*a, \
        ix = cvFloor(x), iy = cvFloor(y), \
        x -= ix, y -= iy, \
        cvRound(center[iy*step + ix]*(1-x)*(1-y) + center[(iy+1)*step + ix]*(1-x)*y + \
                center[iy*step + ix+1]*x*(1-y) + center[(iy+1)*step + ix+1]*x*y))
#endif

    if( WTA_K == 2 )
    {
        for (int i = 0; i < dsize; ++i, pattern += 16)
        {
            int t0, t1, val;
            t0 = GET_VALUE(0); t1 = GET_VALUE(1);
            val = t0 < t1;
            t0 = GET_VALUE(2); t1 = GET_VALUE(3);
            val |= (t0 < t1) << 1;
            t0 = GET_VALUE(4); t1 = GET_VALUE(5);
            val |= (t0 < t1) << 2;
            t0 = GET_VALUE(6); t1 = GET_VALUE(7);
            val |= (t0 < t1) << 3;
            t0 = GET_VALUE(8); t1 = GET_VALUE(9);
            val |= (t0 < t1) << 4;
            t0 = GET_VALUE(10); t1 = GET_VALUE(11);
            val |= (t0 < t1) << 5;
            t0 = GET_VALUE(12); t1 = GET_VALUE(13);
            val |= (t0 < t1) << 6;
            t0 = GET_VALUE(14); t1 = GET_VALUE(15);
            val |= (t0 < t1) << 7;

            desc[i] = (uchar)val;
        }
    }
    else if( WTA_K == 3 )
    {
        for (int i = 0; i < dsize; ++i, pattern += 12)
        {
            int t0, t1, t2, val;
            t0 = GET_VALUE(0); t1 = GET_VALUE(1); t2 = GET_VALUE(2);
            val = t2 > t1 ? (t2 > t0 ? 2 : 0) : (t1 > t0);

            t0 = GET_VALUE(3); t1 = GET_VALUE(4); t2 = GET_VALUE(5);
            val |= (t2 > t1 ? (t2 > t0 ? 2 : 0) : (t1 > t0)) << 2;

            t0 = GET_VALUE(6); t1 = GET_VALUE(7); t2 = GET_VALUE(8);
            val |= (t2 > t1 ? (t2 > t0 ? 2 : 0) : (t1 > t0)) << 4;

            t0 = GET_VALUE(9); t1 = GET_VALUE(10); t2 = GET_VALUE(11);
            val |= (t2 > t1 ? (t2 > t0 ? 2 : 0) : (t1 > t0)) << 6;

            desc[i] = (uchar)val;
        }
    }
    else if( WTA_K == 4 )
    {
        for (int i = 0; i < dsize; ++i, pattern += 16)
        {
            int t0, t1, t2, t3, u, v, k, val;
            t0 = GET_VALUE(0); t1 = GET_VALUE(1);
            t2 = GET_VALUE(2); t3 = GET_VALUE(3);
            u = 0, v = 2;
            if( t1 > t0 ) t0 = t1, u = 1;
            if( t3 > t2 ) t2 = t3, v = 3;
            k = t0 > t2 ? u : v;
            val = k;

            t0 = GET_VALUE(4); t1 = GET_VALUE(5);
            t2 = GET_VALUE(6); t3 = GET_VALUE(7);
            u = 0, v = 2;
            if( t1 > t0 ) t0 = t1, u = 1;
            if( t3 > t2 ) t2 = t3, v = 3;
            k = t0 > t2 ? u : v;
            val |= k << 2;

            t0 = GET_VALUE(8); t1 = GET_VALUE(9);
            t2 = GET_VALUE(10); t3 = GET_VALUE(11);
            u = 0, v = 2;
            if( t1 > t0 ) t0 = t1, u = 1;
            if( t3 > t2 ) t2 = t3, v = 3;
            k = t0 > t2 ? u : v;
            val |= k << 4;

            t0 = GET_VALUE(12); t1 = GET_VALUE(13);
            t2 = GET_VALUE(14); t3 = GET_VALUE(15);
            u = 0, v = 2;
            if( t1 > t0 ) t0 = t1, u = 1;
            if( t3 > t2 ) t2 = t3, v = 3;
            k = t0 > t2 ? u : v;
            val |= k << 6;

            desc[i] = (uchar)val;
        }
    }
    else
        CV_Error( CV_StsBadSize, "Wrong WTA_K. It can be only 2, 3 or 4." );
    #undef GET_VALUE
}

 

 

 

 

 

If there are mistakes, corrections and criticism are welcome. Some references were used but, because I forgot to record them, are not listed below; my apologies.

References:

https://blog.csdn.net/Small_Munich/article/details/80866162

https://www.cnblogs.com/TransTown/p/7396996.html

 http://docs.opencv.org/3.3.0/d2/d29/classcv_1_1KeyPoint.html

https://blog.csdn.net/Darlingqiang/article/details/79404869

https://blog.csdn.net/frozenspring/article/details/78146076

https://blog.csdn.net/zcg1942/article/details/83824367

https://www.cnblogs.com/wyuzl/p/7856863.html
