OpenCV FAST detection algorithm

The previous article discussed corner detection; a corner is in fact one kind of image feature point. For an image, feature points are generally divided into three types: edges, corners, and blobs. In OpenCV, besides corner detection, the following image feature point detection methods are available:

FAST
SURF
ORB
BRISK
KAZE
AKAZE
MSER
GFTT (Good Features to Track)
Blob (blob detection)
STAR
AGAST
  Each of these image feature detection algorithms will be introduced in turn. First, however, we need to understand an OpenCV data structure, the KeyPoint class, whose definition in the header is roughly as follows:

class KeyPoint
{
    Point2f pt;      // coordinates of the feature point in the image
    float size;      // diameter of the meaningful neighborhood of the feature point
    float angle;     // orientation of the feature point, in [0, 360); -1 means it is not used. The orientation makes a feature point more distinctive than coordinates and size alone, which by themselves could describe a false feature point
    float response;  // response strength, i.e. how strong (how "corner-like") the feature point is; useful later for sorting and filtering
    int octave;      // octave (pyramid layer) from which the feature point was extracted
    int class_id;    // object id, can be used to cluster feature points
};
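As a small illustration of how these fields are typically used (a minimal sketch not taken from the text above; the function name and the cut-off count are assumptions), the response field can be used to keep only the strongest keypoints returned by any detector:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Keep only the n strongest keypoints, judged by KeyPoint::response.
// "keypoints" is assumed to come from some detector's detect() call.
void keepStrongest(std::vector<cv::KeyPoint>& keypoints, size_t n)
{
    std::sort(keypoints.begin(), keypoints.end(),
              [](const cv::KeyPoint& a, const cv::KeyPoint& b) {
                  return a.response > b.response;   // strongest response first
              });
    if (keypoints.size() > n)
        keypoints.resize(n);
}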

The ultimate goal of every image feature point detection algorithm is the same: once the feature points of one image have been detected, they can be matched against the feature points of another image, and the similarity of the two images can be judged from how well the feature points match.

For example, we can detect the feature points of a face in one image and then search another image for a set of feature points with a high degree of similarity, thereby confirming whether the other image contains a face and where that face is located. Feature detection algorithms like this play an important role in object detection, visual tracking, and 3D reconstruction.

The generic interface for image feature point detection
  To make the image feature point detection algorithms easy to use, OpenCV wraps all of them behind a similar API built on the Ptr template class; that is, all feature detection algorithms implement the same interface, so detecting image feature points always follows a similar pattern:
  Ptr<FeatureDetectorClassName> variableName = FeatureDetectorClassName::create();
  variableName->detect(source image, keypoint vector);
  With this pattern, almost every image feature detection algorithm can be called in the same way. Note, however, that the create() function has several overloads: if it is called with no arguments, each detection algorithm initializes its class with its own default values; if you want to change the parameters, you need to pass different initialization values when calling create() on the corresponding detector class.
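As a minimal sketch of this generic pattern (the image path and the choice of ORB below are merely assumptions for illustration; any detector implementing the interface can be substituted):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Assumed input image, loaded directly as grayscale.
    cv::Mat img = cv::imread("test.jpg", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return -1;

    // The same create()/detect() pattern works for ORB, BRISK, AKAZE, FAST, ...
    cv::Ptr<cv::ORB> detector = cv::ORB::create();   // default parameters
    std::vector<cv::KeyPoint> keypoints;
    detector->detect(img, keypoints);

    return 0;
}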
  In addition, OpenCV provides a convenient function for quickly displaying the detected image feature points:
  drawKeypoints(input image used as the canvas, keypoint vector, output image, color used for rendering, rendering mode)
  In general, the canvas we use is the original image in which the features were detected (usually the original image is converted to a grayscale image before detection, which keeps the algorithm simpler and cheaper).
  The rendering mode can be chosen from the values of the DrawMatchesFlags enumeration:
  DEFAULT: only the coordinates of the feature points are drawn; each feature point appears as a small dot at its center coordinates.
  DRAW_OVER_OUTIMG: the function does not create the output image but draws directly into the existing output image variable; this requires the output image to be initialized beforehand with the correct size and type.
  NOT_DRAW_SINGLE_POINTS: single feature points are not drawn.
  DRAW_RICH_KEYPOINTS: each feature point is drawn as a circle with an orientation, so this mode shows the coordinates, size, and orientation of the feature point. It is the most informative way to draw the result, but the drawback is that the picture can become rather cluttered.
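Continuing the generic sketch above (so img and keypoints are assumed to exist already), drawing the keypoints in the rich mode would look like this:

// Draw the detected keypoints onto a new canvas with size and orientation shown.
cv::Mat canvas;
cv::drawKeypoints(img, keypoints, canvas,
                  cv::Scalar::all(-1),                          // Scalar::all(-1): a random color per keypoint
                  cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);   // circles with size and orientation
cv::imshow("keypoints", canvas);
cv::waitKey(0);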

A. FAST feature detection algorithm
The FAST algorithm is a corner-based image feature point detection algorithm.

The first step of any feature point detection algorithm is to define what a feature point is. The FAST algorithm's definition is: if a pixel differs sufficiently from a sufficient number of the pixels in its surrounding area, then that pixel is a feature point. For a grayscale image, this means that if the gray value of the pixel differs enough from the gray values of enough surrounding pixels, the pixel is a feature point.

The detailed steps of the algorithm are as follows.

Select a pixel from the image and obtain its pixel value; we then determine whether this point is a feature point.
Take the selected point as the center and construct a Bresenham circle of radius 3 (a discretized circle trajectory whose points have integer coordinates). In general this yields 16 points on the circle, as shown in the figure below.
[Figure: the 16-pixel Bresenham circle of radius 3 around the candidate pixel]

The black point is the coordinate point selected in step 1, with coordinates (0, 0).

Now select a threshold, say t. This is the key step: if, among the 16 points, there are N consecutive pixels whose brightness differs from the brightness of the center point by more than t (all brighter or all darker), then the center point is a feature point. (N is usually taken as 9 or 12; 9 has been shown to give better results, since more feature points are obtained, which provides relatively more sample data for subsequent processing.)
Traversing every point on the circle for each candidate takes a relatively long time, so there is a simpler quick test: check only the four pixels at positions 1, 9, 5, and 13. First examine positions 1 and 9; if they are brighter than Ip + t (the center brightness plus the threshold) or darker than Ip - t (the center brightness minus the threshold), then examine positions 5 and 13. If the center point p is a corner, at least three of these four pixels must satisfy the condition, because a corner should satisfy it on more than three quarters of the circle; if this condition is not met, p cannot be a corner and is rejected, otherwise the full test over all 16 circle pixels is applied.
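The following is a minimal sketch of this quick test for a single candidate pixel; it is an illustration under stated assumptions, not OpenCV's own implementation, and the function name and offsets are chosen here for clarity:

#include <opencv2/opencv.hpp>

// Quick rejection test for one candidate pixel (x, y) of a grayscale image.
// Only circle positions 1, 5, 9, 13 (top, right, bottom, left at radius 3) are
// examined; (x, y) is assumed to lie at least 3 pixels away from the image border.
static bool quickTest(const cv::Mat& gray, int x, int y, int t)
{
    int Ip = gray.at<uchar>(y, x);
    int vals[4] = {
        gray.at<uchar>(y - 3, x),   // position 1  (top)
        gray.at<uchar>(y, x + 3),   // position 5  (right)
        gray.at<uchar>(y + 3, x),   // position 9  (bottom)
        gray.at<uchar>(y, x - 3)    // position 13 (left)
    };
    int brighter = 0, darker = 0;
    for (int v : vals)
    {
        if (v > Ip + t) ++brighter;
        else if (v < Ip - t) ++darker;
    }
    // For a corner, at least 3 of the 4 pixels must be consistently brighter or darker.
    return brighter >= 3 || darker >= 3;
}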
However, this detection method brings a problem: it causes a clustering effect, where many feature points appear close together at high density in parts of the image. The FAST algorithm uses non-maximum suppression to eliminate this situation, with the following specific steps.
Compute a response magnitude (score function) V for each detected feature point; V is defined here as the sum of the absolute differences between the center point and the 16 pixels around it.
Consider two adjacent feature points and compare their V values.
The point with the lower V value is deleted.
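A minimal sketch of this score computation, under the same assumptions as the quick-test sketch above (standard radius-3 Bresenham circle offsets, candidate at least 3 pixels from the border):

#include <opencv2/opencv.hpp>
#include <cstdlib>

// Sum of absolute differences between the center pixel and the 16 circle pixels,
// used as the score V for non-maximum suppression.
static int fastScore(const cv::Mat& gray, int x, int y)
{
    // Radius-3 Bresenham circle offsets, positions 1..16 clockwise from the top.
    static const int dx[16] = { 0, 1, 2, 3, 3, 3, 2, 1, 0, -1, -2, -3, -3, -3, -2, -1 };
    static const int dy[16] = { -3, -3, -2, -1, 0, 1, 2, 3, 3, 3, 2, 1, 0, -1, -2, -3 };

    int Ip = gray.at<uchar>(y, x);
    int v = 0;
    for (int i = 0; i < 16; ++i)
        v += std::abs(gray.at<uchar>(y + dy[i], x + dx[i]) - Ip);
    return v;   // of two adjacent keypoints, the one with the lower V is suppressed
}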
  That is the principle behind FAST feature point detection. The API of the FAST detection algorithm in OpenCV is defined as follows:

static Ptr<FastFeatureDetector> create( int threshold=10, bool nonmaxSuppression=true,
                                        int type=FastFeatureDetector::TYPE_9_16 );

threshold is the threshold t of step 3, used when comparing the circle points with the center point; nonmaxSuppression indicates whether to perform the non-maximum suppression of step 5; if the FAST detection result shows clustering, enabling it should be considered. The third parameter, type, takes its value from the FastFeatureDetector enumeration, with the following options:

TYPE_5_8: take 8 points on the circle; when 5 of them satisfy the condition, the center is a feature point.
TYPE_7_12: take 12 points on the circle; when 7 of them satisfy the condition, the center is a feature point.
TYPE_9_16: take 16 points on the circle; when 9 of them satisfy the condition, the center is a feature point.
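As a small sketch of calling create() with explicit parameters (the threshold of 20 and the variable grayimg, assumed to be a grayscale cv::Mat, are example assumptions):

// FAST with a larger threshold, non-maximum suppression enabled, and the 12-point test.
cv::Ptr<cv::FastFeatureDetector> fast =
    cv::FastFeatureDetector::create(20, true, cv::FastFeatureDetector::TYPE_7_12);
std::vector<cv::KeyPoint> keypoints;
fast->detect(grayimg, keypoints);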
  In summary, the FAST detection algorithm does not deal with multiple scales, which is why it is relatively fast to compute; however, when an image contains a lot of noise it produces more erroneous feature points and its robustness is poor, and the result also depends strongly on the threshold t. In addition, FAST has no multi-scale capability and its feature points carry no orientation information, so it loses rotational invariance. Nevertheless, in scenarios with real-time requirements, such as object recognition in video surveillance, it can be used.

#include "stdafx.h"
#include<opencv2\opencv.hpp>
#include <opencv2/core/core.hpp> 
#include <opencv2/highgui/highgui.hpp> 
#include <opencv2/imgproc/imgproc.hpp> 
#include <opencv2/features2d/features2d.hpp>
#include<iostream>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
	Mat img = imread("1.jpg");
	Mat grayimg;
	cvtColor(img, grayimg, CV_RGB2GRAY);
	Ptr<FeatureDetector> fast = FeatureDetector::create("FAST");
	vector<KeyPoint> keypoint1;
	fast->detect(grayimg, keypoint1);
	Mat img2;
	drawKeypoints(grayimg, keypoint1, img2, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
	imshow("结果图", img2);
	waitKey(0);
    return 0;
}

[Figure: detection result, FAST keypoints drawn as small dots on the grayscale image]
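As a side note (a sketch, not shown in the example above), OpenCV also exposes FAST as a standalone function, cv::FAST(), which detects the keypoints without creating a detector object; reusing grayimg from the example above:

// Equivalent detection with the free function: threshold 10, non-maximum suppression on.
vector<KeyPoint> keypoint2;
FAST(grayimg, keypoint2, 10, true);
cout << "detected " << keypoint2.size() << " keypoints" << endl;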



Origin blog.csdn.net/weixin_42076938/article/details/105234654