(1) Feature Extraction

Feature Extraction
       Image features are generally divided into three types: point, line, and plane features. Line and plane features make use of more of the image's information, and therefore have higher discriminative power. Unfortunately, the conditions for extracting line and plane features are harsh, so they are not widely used in practical applications. (There are SLAM systems based on line features; when the image texture is weak, line features can be quite useful, but they also increase the amount of computation, and the performance gain is limited.) With the progress of deep learning on images, descriptors learned from full image patches keep improving, and have even surpassed hand-designed feature points. As noted above, because they use the image information more comprehensively, the recognition performance of learned descriptors keeps getting better.
      In visual SLAM, point features are the ones mainly used.
 
>>  Point features can be divided into two categories:
1. Hand-designed feature points:
Such feature points are mainly described by modeling the geometry, or some humanly recognizable mathematical property, of a particular image region with hand-designed functions. A typical example is SIFT: keypoints are located as extrema of the difference-of-Gaussians computed between levels of the image's Gaussian pyramid, and each keypoint is then described using its neighborhood information to obtain the final descriptor.
2. Deep-learned feature descriptors:
Such feature points are produced by purpose-designed deep networks that extract useful image information through a series of convolution and pooling operations. This sounds abstract, but as I understand it, although a deep network looks like a black box, each layer is really just a function $f_{i}(x)$, and the final output is the composite of the individual convolutional layers: $G = f_{N}(f_{N-1}(\cdots f_{1}(x) \cdots))$. Early networks were designed mainly to imitate the computation steps of hand-designed descriptors, with each convolutional layer playing the role of one manual step. Later networks mainly exploit more of the image information to make up for the shortcomings of hand-designed descriptors. (Such networks are typically trained on image patches around hand-designed feature points, which serve as the training set.)
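The composition view above can be sketched in a few lines of Python. Each "layer" below is a scalar toy stand-in for a convolution followed by a nonlinearity; this is a hypothetical illustration of $G = f_{N}(\cdots f_{1}(x) \cdots)$, not a real descriptor network:

```python
# Toy illustration of a deep network as a composite function
# G = f_N(f_{N-1}(... f_1(x) ...)).  Each "layer" is a scalar
# stand-in for convolution + ReLU; real layers act on tensors.

def make_layer(weight, bias):
    """Return f_i(x) = max(0, weight * x + bias) -- a 1-D 'conv + ReLU' toy."""
    def f(x):
        return max(0.0, weight * x + bias)
    return f

def compose(layers):
    """Return G(x) = f_N(f_{N-1}(... f_1(x) ...))."""
    def G(x):
        for f in layers:          # f_1 is applied first, f_N last
            x = f(x)
        return x
    return G

layers = [make_layer(2.0, 1.0), make_layer(0.5, -1.0), make_layer(3.0, 0.0)]
G = compose(layers)
# G(1.0) = f_3(f_2(f_1(1.0))) = f_3(f_2(3.0)) = f_3(0.5) = 1.5
```

Training then amounts to adjusting the weights and biases of every $f_{i}$ at once, which is what lets later layers exploit information a fixed hand-designed pipeline would discard.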

 

>> The two parts of a feature point:
   A feature point actually consists of two parts:
  1. The keypoint (interest point) itself, such as a FAST corner;
  2. The descriptor, computed from the neighborhood information around the keypoint's location. Descriptors such as SIFT, SURF, and ORB are the ones mainly used in SLAM.
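To make the keypoint part concrete, here is a simplified pure-Python sketch of the FAST segment test (the real detector adds a quick rejection pre-test and non-maximum suppression; the threshold `t` and arc length `n` below are illustrative defaults, not canonical values):

```python
# Simplified sketch of the FAST segment test (pure Python, no OpenCV).
# A pixel p is declared a corner if at least n contiguous pixels on a
# radius-3 circle around it are all brighter than p + t or all darker
# than p - t.

# the 16 pixel offsets of the radius-3 Bresenham circle used by FAST
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, x, y, t=20, n=12):
    """img is a list of rows of intensities; (x, y) must be >= 3 px from the border."""
    p = img[y][x]
    # classify each circle pixel: +1 brighter, -1 darker, 0 similar
    states = [1 if img[y + dy][x + dx] >= p + t
              else -1 if img[y + dy][x + dx] <= p - t
              else 0
              for dx, dy in CIRCLE]
    # look for n contiguous equal labels, wrapping around the circle
    doubled = states + states
    for s in (1, -1):
        run = 0
        for v in doubled:
            run = run + 1 if v == s else 0
            if run >= n:
                return True
    return False

# a dark pixel surrounded by bright ones passes; a flat patch does not
bright = [[200] * 7 for _ in range(7)]
bright[3][3] = 50
flat = [[100] * 7 for _ in range(7)]
```

The descriptor (SIFT, SURF, ORB, ...) is then computed separately from the patch around each surviving corner.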
 
>> Defects of feature extraction:
  When feature points are extracted directly on the whole image, they usually cluster together. Obviously, in places with rich texture, many more feature points are naturally extracted. The result is that a great many feature points are extracted in certain regions of the image, while very few, or none at all, are extracted elsewhere. In practical applications this makes the estimated pose deviate considerably and degrades the localization accuracy, which is exactly what we want to avoid.

 

>>  Feature extraction tips:
  By dividing the image into blocks and extracting feature points separately in each sub-block, the problem of unevenly distributed feature points can be alleviated.
   Take the ORB feature extraction in ORB-SLAM as an example:
  1. Divide the image $I$ into $4$ regions $\{s_1, s_2, s_3, s_4\}$ and extract feature points in each region;
  2. Divide each of $\{s_1, s_2, s_3, s_4\}$ into $4$ regions again, giving $16$ sub-images in total, and assign the feature points extracted in step $1$ to these sub-images by region;
  3. For every sub-image that contains more than one feature point, subdivide it further, stopping when a minimum cell size is reached. If a cell still holds several feature points at that point, keep only the feature point with the maximum response;
  4. If in step $1$ the number of feature points extracted in some sub-image is too small, the detection threshold needs to be relaxed to increase the number of feature points;
  5. Steps $1$–$3$ can be implemented by recursion.
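A minimal sketch of the recursive subdivision in plain Python might look as follows. This is a hypothetical simplification, not ORB-SLAM's actual `ORBextractor` code: keypoints are `(x, y, response)` tuples, cells are always split evenly, and the threshold adjustment of step 4 is omitted:

```python
# Hypothetical simplification of ORB-SLAM's quadtree keypoint
# distribution: recursively split a cell into four quadrants; once a
# cell reaches the minimum size, keep only its strongest keypoint.
# Keypoints are (x, y, response) tuples.

def distribute(keypoints, x0, y0, x1, y1, min_size=2.0):
    if len(keypoints) <= 1:
        return list(keypoints)                     # nothing to thin out
    if (x1 - x0) <= min_size or (y1 - y0) <= min_size:
        # minimum cell size reached: keep the max-response keypoint
        return [max(keypoints, key=lambda kp: kp[2])]
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0      # split into 4 quadrants
    out = []
    for ax, ay, bx, by in [(x0, y0, mx, my), (mx, y0, x1, my),
                           (x0, my, mx, y1), (mx, my, x1, y1)]:
        sub = [kp for kp in keypoints if ax <= kp[0] < bx and ay <= kp[1] < by]
        out.extend(distribute(sub, ax, ay, bx, by, min_size))
    return out

# two clustered keypoints collapse to the stronger one; isolated ones survive
kps = [(1.0, 1.0, 0.9), (1.5, 1.2, 0.5), (10.0, 10.0, 0.3), (30.0, 30.0, 0.8)]
kept = distribute(kps, 0.0, 0.0, 32.0, 32.0)
```

Note how empty quadrants terminate immediately, so the recursion only goes deep where keypoints actually cluster.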
 
PS: I implemented a simple feature extraction function myself, but without the finer recursive subdivision of the image: it simply divides the image into several regions and extracts features in each region. You are welcome to use it as a reference.
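A non-recursive variant of that kind — a fixed grid with only the strongest keypoints kept per cell — could look like this (a hypothetical helper, not my actual code; keypoints are `(x, y, response)` tuples):

```python
# Sketch of the simple non-recursive variant: divide the image into a
# fixed rows x cols grid and keep the top_k strongest keypoints per cell.

def grid_filter(keypoints, width, height, rows=4, cols=4, top_k=1):
    cells = {}
    for kp in keypoints:
        c = min(int(kp[0] * cols / width), cols - 1)   # cell column index
        r = min(int(kp[1] * rows / height), rows - 1)  # cell row index
        cells.setdefault((r, c), []).append(kp)
    out = []
    for cell_kps in cells.values():
        cell_kps.sort(key=lambda kp: kp[2], reverse=True)  # strongest first
        out.extend(cell_kps[:top_k])
    return out

# three keypoints, two of them in the same cell: the weaker one is dropped
filtered = grid_filter([(1, 1, 0.9), (2, 2, 0.5), (50, 50, 0.7)], 64, 64)
```

It cannot match the adaptivity of the recursive subdivision, but it already spreads the surviving keypoints across the image.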
 

Origin www.cnblogs.com/yepeichu/p/12468228.html