Random Sample Consensus (RANSAC)

Foreword

Random Sample Consensus (RANSAC) is an iterative method for estimating the parameters of a mathematical model from a set of observed data that contains outliers, in such a way that the outliers have no influence on the estimates. It can therefore also be interpreted as an outlier detection method. In a sense it is a non-deterministic algorithm: it produces a reasonable result only with a certain probability, and that probability increases as the number of iterations grows. RANSAC was first proposed by Fischler and Bolles in 1981 to solve the Location Determination Problem (LDP; see Appendix A for a description) 1 2 .

Briefly, RANSAC is most commonly used to find the inliers in a data set, to exclude the outliers, or to fit a model robustly 3 .

RANSAC can therefore also be understood as an idea: exclude possibly erroneous data while estimating model parameters, or while doing other tasks such as image feature-point matching. This has something in common with the idea of active learning: active learning tries to train a model from as few labeled points as possible, just as RANSAC iteratively searches for the inliers, and both involve a process of identifying and handling outliers. The ideas behind active learning are somewhat more complex.

Back to RANSAC: there are some basic assumptions we need to know.

  • The entire data set consists of inliers and outliers;
  • The distribution of the inliers can be explained by some parametric model, even in the presence of noise;
  • The outliers do not fit the model; their extreme values may come from measurement noise, erroneous assumptions about the data, and the like.

Even if these assumptions do not hold for a data set, i.e. there are no outliers, RANSAC's estimate of the model parameters is not affected: in that case, during the iterations the entire data set ends up being accepted as inliers, and the model parameters are estimated from it. Next, let us look at how the iterative process of the RANSAC algorithm proceeds.

Algorithm

The description below follows the Wikipedia article 1 , with a figure taken from the KTH course slides 4 ; the two differ slightly in wording, but the general idea is the same.

1. Description

  1. Randomly select a subset of the data set, called the "hypothetical inliers", as the initial consensus set;
  2. Estimate (or train) a model that fits this subset;
  3. Test the remaining data samples against the fitted model using some loss function or rule, and add the samples that agree with the model to the consensus set. For example, if the model is a straight line, a remaining sample whose distance to the line is less than a threshold t is considered consistent with the model and is added to the consensus set. The data points in the consensus set are the inliers; the rest are the outliers;
  4. If the consensus set contains enough data samples, the estimated model is considered reasonably good 2 ;
  5. Re-estimate the model using all the data samples in the consensus set;
  6. Repeat the above procedure, and finally return the model with the smallest error, or the model containing the most inliers.
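Step 3 above can be sketched in code. A minimal example for the line model (the function name and the slope/intercept parametrization are my own illustration):

```python
import numpy as np

def consensus_set(points, a, b, t):
    """Return the points whose perpendicular distance to y = a*x + b is below t."""
    x, y = points[:, 0], points[:, 1]
    # Distance from (x, y) to the line a*x - y + b = 0.
    dist = np.abs(a * x - y + b) / np.sqrt(a * a + 1.0)
    return points[dist < t]

pts = np.array([[0.0, 0.1], [1.0, 0.9], [2.0, 5.0], [3.0, 3.1]])
inliers = consensus_set(pts, a=1.0, b=0.0, t=0.5)  # test against the line y = x
```

Here the three points near y = x pass the threshold, while (2.0, 5.0) is rejected as an outlier.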

2. Pseudocode

Given:
    data – A set of observations.
    model – A model to explain observed data points.
    n – Minimum number of data points required to estimate model parameters.
    k – Maximum number of iterations allowed in the algorithm.
    t – Threshold value to determine data points that are fit well by model.
    d – Number of close data points required to assert that a model fits well to data.

Return:
    bestFit – model parameters which best fit the data (or null if no good model is found)

iterations = 0
bestFit = null
bestErr = something really large

while iterations < k do
    maybeInliers := n randomly selected values from data
    maybeModel := model parameters fitted to maybeInliers
    alsoInliers := empty set
    for every point in data not in maybeInliers do
        if point fits maybeModel with an error smaller than t
             add point to alsoInliers
    end for
    if the number of elements in alsoInliers is > d then
        // This implies that we may have found a good model
        // now test how good it is.
        betterModel := model parameters fitted to all points in maybeInliers and alsoInliers
        thisErr := a measure of how well betterModel fits these points
        if thisErr < bestErr then
            bestFit := betterModel
            bestErr := thisErr
        end if
    end if
    increment iterations
end while

return bestFit
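The maximum iteration count k above is usually derived from the desired probability of success: if w is the expected fraction of inliers and each iteration samples n points, then 1 - (1 - w^n)^k is the probability that at least one sample is outlier-free after k iterations 2 . A small sketch (the function name is my own):

```python
import math

def ransac_iterations(p, w, n):
    """Smallest k such that, with probability at least p, some random sample
    of n points contains only inliers, given an inlier ratio w."""
    # Solve 1 - (1 - w**n)**k >= p for k.
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** n))

k = ransac_iterations(p=0.99, w=0.5, n=2)  # 2-point line model, 50% inliers -> k = 17
```

A higher inlier ratio shrinks k quickly, which is why RANSAC is practical even for large data sets.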

Example

Talk is cheap, show me the code! 5

1. MATLAB code by Peter Kovesi

See the RANSAC fitting section of reference 5 . Kovesi's other image-processing code there is also very good.

2. RANSAC line fitting (Python)

To be completed; pushing myself to finish it within the week.
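In the meantime, here is a minimal self-contained sketch in Python, following the pseudocode above (the parameter names k, t, d, n match it; the vertical-distance test and the np.polyfit least-squares fit are my own choices, not taken from Kovesi's code):

```python
import numpy as np

def ransac_line(data, k=100, t=0.2, d=10, n=2, seed=None):
    """Fit y = a*x + b to data (an N x 2 array) with RANSAC.

    Returns (a, b), or None if no sample gathers more than d extra inliers.
    """
    rng = np.random.default_rng(seed)
    best_fit, best_err = None, np.inf
    for _ in range(k):
        idx = rng.choice(len(data), size=n, replace=False)
        maybe_inliers = data[idx]
        # Fit a line to the minimal sample (degree-1 least squares).
        a, b = np.polyfit(maybe_inliers[:, 0], maybe_inliers[:, 1], 1)
        # Consensus set: remaining points within vertical distance t of the line.
        rest = np.delete(data, idx, axis=0)
        also_inliers = rest[np.abs(a * rest[:, 0] + b - rest[:, 1]) < t]
        if len(also_inliers) > d:
            # Re-estimate the model from the whole consensus set.
            pts = np.vstack([maybe_inliers, also_inliers])
            a2, b2 = np.polyfit(pts[:, 0], pts[:, 1], 1)
            err = np.mean((a2 * pts[:, 0] + b2 - pts[:, 1]) ** 2)
            if err < best_err:
                best_fit, best_err = (a2, b2), err
    return best_fit

# Synthetic data: 40 points near y = 2x + 1 plus 10 gross outliers.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 40)
clean = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.1, 40)])
noise = rng.uniform(0, 10, (10, 2))
fit = ransac_line(np.vstack([clean, noise]), k=200, t=0.3, d=15, seed=1)
```

With these settings the recovered (a, b) should be close to (2, 1), despite 20% gross outliers.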

Appendix A

The LDP problem 2 is as follows.

Given a set of control points (or landmarks) whose 3D coordinates in some spatial coordinate system are known, m points in total, and given an image containing the projections of those m control points, determine the position of the image (i.e. the image origin, or the camera) in that 3D coordinate system.

This camera pose estimation is in fact the classic 3D-2D problem in SLAM: PnP (Perspective-n-Point).

References


  1. "Random sample consensus", Wikipedia.

  2. Fischler, Martin A., and Robert C. Bolles. "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography." Communications of the ACM 24.6 (1981): 381-395.

  3. Borkar, Amol, Monson Hayes, and Mark T. Smith. "Robust lane detection and tracking with ransac and kalman filter." 2009 16th IEEE International Conference on Image Processing (ICIP). IEEE, 2009.

  4. KTH regression course slides.

  5. Peter Kovesi, "MATLAB and Octave Functions for Computer Vision and Image Processing."

Origin blog.csdn.net/weixin_44278406/article/details/105098424