Object Detection (OD) - Performance Metrics

There are plenty of online introductions to performance metrics for object detection, but it is surprisingly hard to find a satisfactory one. In other words, after reading their definitions of a metric (such as FP), I still did not know how to actually count it for a concrete problem. The reason is that some articles mechanically carry the "classification" metrics over to the "detection" problem, and some authors do not fully understand the metrics themselves. Here I recommend a GitHub resource whose explanation is very clear and well reasoned (in English): Object-Detection-Metrics

1. Basic metrics
  • TP: true positive, a correct detection (IoU > threshold)
  • FP: false positive, an incorrect detection (IoU < threshold). Note that if several detections overlap the same ground truth with IoU > threshold, only the detection with the largest IoU counts as a TP; the rest count as FPs (a minimal counting sketch follows this list).
  • FN: false negative, a missed object (a ground truth that is not detected)
  • TN: not applicable. By definition, a TN is a correct judgement on a negative sample (usually background), i.e. "correctly not detecting anything"; in object detection such background regions are essentially countless, which is why we do not compute TN.
  • threshold: the IoU threshold, typically 50%, 75%, 95%, etc.
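The following minimal sketch (mine, not from the original post or the linked repo) shows one common greedy way to count these quantities for a single image: detections are processed in descending confidence, each ground truth can be claimed at most once, and any further overlapping detections become FPs. The [x1, y1, x2, y2] box format and the helper names are assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def count_tp_fp_fn(detections, ground_truths, iou_threshold=0.5):
    """detections: list of (confidence, box); ground_truths: list of boxes."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = set()   # indices of ground truths that already have a TP
    tp = fp = 0
    for _, det_box in detections:
        best_iou, best_gt = 0.0, None
        for g, gt_box in enumerate(ground_truths):
            overlap = iou(det_box, gt_box)
            if overlap > best_iou:
                best_iou, best_gt = overlap, g
        if best_iou > iou_threshold and best_gt not in matched:
            tp += 1
            matched.add(best_gt)   # each ground truth is counted only once
        else:
            fp += 1                # low IoU, or a duplicate of a matched GT
    fn = len(ground_truths) - len(matched)   # ground truths never detected
    return tp, fp, fn
```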
2. Advanced metrics
  • Precision: precision (accuracy), "of the objects the model recognizes as targets, how many are correct":
    Precision = TP / (TP + FP)
  • Recall: the ability to recall, "how many of all the objects are identified" (a short snippet computing both follows this list):
    Recall = TP / (TP + FN)
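A tiny snippet (mine, for illustration only) that turns the counts from the previous section into precision and recall; the example numbers are made up.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    return precision, recall

# e.g. 7 correct detections, 3 false alarms, 2 missed objects:
print(precision_recall(7, 3, 2))   # (0.7, 0.777...)
```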
3. Final metrics
  • PR curve: Precision is on the vertical axis and Recall on the horizontal axis; changing the score threshold yields different (recall, precision) points, which are then fitted into a curve. The PR curve is how a model weighs Precision against Recall; put another way, a model with too many FPs or too many FNs is usually not a good model.
    A few more words here: in a detection task the model outputs a confidence score for each detection box. Suppose we set a confidence threshold σ_score = 0.9, so we only keep detections with score > 0.9, and we obtain one (recall, precision) pair; when we change σ_score, more detections may be kept and we get a new (recall, precision) pair... By repeating this we obtain enough points to fit the PR curve (a small threshold-sweep sketch follows the figure below). Also note that, as the precision and recall formulas show, as σ_score changes the denominator of precision keeps changing (growing), while the denominator of recall stays constant (the number of ground truths).
    (The figure below originally appears on page 31 of Zhou Zhihua's "Machine Learning".)
    [Figure: example P-R curves]
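A minimal sketch (mine; the helper name pr_points and the data layout are assumptions) of the threshold sweep described above: sorting detections by confidence and lowering the cutoff one detection at a time yields one (recall, precision) point per kept detection.

```python
def pr_points(scored_dets, num_gt):
    """scored_dets: list of (confidence, is_tp) for every detection in the
    dataset; num_gt: total number of ground-truth boxes."""
    scored_dets = sorted(scored_dets, key=lambda d: d[0], reverse=True)
    points, tp, fp = [], 0, 0
    for _, is_tp in scored_dets:
        if is_tp:
            tp += 1
        else:
            fp += 1
        # precision's denominator (tp + fp) grows as the threshold drops;
        # recall's denominator (num_gt) never changes.
        points.append((tp / num_gt, tp / (tp + fp)))   # (recall, precision)
    return points
```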
  • AP: Average Precision, i.e. the average of the maximum precision taken at different recall levels.
    • 11-point interpolation: take 11 equally spaced recall levels [0, 0.1, ..., 1]; at each level r, take "the maximum precision among points whose recall is greater than or equal to r" (a short sketch follows the formula below).
      AP = (1/11) · Σ_{r ∈ {0, 0.1, ..., 1}} p_interp(r), where p_interp(r) = max{ p(r̃) : r̃ ≥ r }
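A short sketch (mine) of the 11-point formula, assuming points is the list of (recall, precision) pairs produced by a sweep like the one above.

```python
def ap_11_point(points):
    """points: list of (recall, precision) pairs."""
    ap = 0.0
    for r in [i / 10 for i in range(11)]:              # r = 0.0, 0.1, ..., 1.0
        # interpolated precision: max precision over all recalls >= r
        candidates = [p for rec, p in points if rec >= r]
        ap += max(candidates) if candidates else 0.0
    return ap / 11
```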
    • All-point interpolation: similarly, we can use all points instead of just 11; in this case AP is approximated by the AUC (Area Under the Curve) of the interpolated PR curve (see the figures below; the area under the red dashed line is the AUC, and a sketch follows them).
      [Figures: PR curve with all-point interpolation and its AUC]
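And a sketch (mine, with the same assumptions about points) of the all-point version: make the precision envelope monotonically non-increasing, then sum the rectangular areas under it, which gives the AUC mentioned above.

```python
def ap_all_points(points):
    """points: list of (recall, precision) pairs."""
    points = sorted(points)                            # sort by recall
    recalls = [0.0] + [r for r, _ in points]
    precisions = [0.0] + [p for _, p in points]
    # Interpolate: precision at each recall becomes the max precision
    # achieved at that recall or any higher recall.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Area under the interpolated curve, as rectangles over recall steps.
    ap = 0.0
    for i in range(1, len(recalls)):
        ap += (recalls[i] - recalls[i - 1]) * precisions[i]
    return ap
```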
      That is about all this article will say on this part; for more, see the GitHub link given at the beginning, which has a very clear worked example, as well as source code, instructions, etc.

Origin: blog.csdn.net/qq_42191914/article/details/103375512