04 - Evaluation Metric mAP (Object Detection)

Main points:

  • Precision: TP/(TP+FP), the proportion of all predictions made by the model that hit a real target.
  • Recall: TP/(TP+FN), the proportion of all real targets that the model finds.
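
To make the two formulas concrete, here is a minimal numeric sketch (the counts are hypothetical):

```python
# Hypothetical counts for one class: 80 correct detections,
# 20 wrong detections, 40 missed ground-truth boxes.
tp, fp, fn = 80, 20, 40

precision = tp / (tp + fp)  # 80 / 100 = 0.80: share of predictions that hit a real target
recall = tp / (tp + fn)     # 80 / 120 ≈ 0.67: share of real targets that were found

print(f"precision={precision:.2f}, recall={recall:.2f}")
```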

Official documentation: https://cocodataset.org/#detection-eval

Reference article: Calculation of mAP


Common metrics in object detection:

  • TP (True Positive): a correct detection, i.e. a predicted box whose IoU with a ground-truth box is ≥ the threshold. In other words, the number of predicted bounding boxes that are correctly classified and correctly localized.
  • FP (False Positive): a wrong detection, i.e. a predicted box whose IoU is < the threshold, whose class is wrong, or which duplicates an already-matched ground truth. In other words, all predicted bounding boxes that remain after removing the correct ones.
  • FN (False Negative): a ground truth that was not detected, i.e. the ground-truth boxes that remain after removing those matched by a correct prediction.
  • Precision: "Precision is the ability of a model to identify only the relevant objects", equal to TP/(TP+FP). That is, the proportion of all predictions made by the model that hit a real target.
  • Recall: "Recall is the ability of a model to find all the relevant cases (all ground truth bounding boxes)", equal to TP/(TP+FN). That is, the proportion of all real targets covered by the model's predictions.
  • In multi-class object detection, TP, FP, and FN are generally counted separately for each class, and per-class Precision and Recall are computed from those counts (see the matching sketch after this list).
  • score/confidence: the score of each predicted bounding box, named differently in different papers. Besides the class label and box coordinates of each prediction, the model also outputs how likely the box is to contain a target; a higher score/confidence means the model considers that box more likely to contain a target.
  • Precision × Recall curve (PR curve): the curve obtained by connecting all (recall, precision) points, generally drawn separately for each class. Several precision values can occur at the same recall because, as detections are added in descending score order, a false positive increases FP (lowering precision) without changing recall.
  • AP (Average Precision): the mean of the highest precision values at different recall levels, i.e. the area under the interpolated PR curve; generally computed separately for each class (see the AP sketch after this list).

  • mAP (mean Average Precision): the mean of the per-class AP values.
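
A minimal sketch of the per-class matching that produces the TP/FP counts, in the greedy, highest-score-first style used by PASCAL VOC-like evaluators (function names and the box format are illustrative, not from the original post):

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def match_detections(dets, scores, gts, iou_thr=0.5):
    """Greedily match predictions of ONE class in ONE image to ground truth.

    dets: list of predicted boxes, scores: their confidences,
    gts: list of ground-truth boxes. Returns tp/fp flags ordered by
    descending score, plus the number of ground-truth boxes."""
    order = np.argsort(-np.asarray(scores))  # highest confidence first
    matched = [False] * len(gts)
    tp = np.zeros(len(dets))
    fp = np.zeros(len(dets))
    for rank, d in enumerate(order):
        ious = [iou(dets[d], g) for g in gts]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and not matched[best]:
            tp[rank] = 1          # correct detection: IoU >= threshold, GT still free
            matched[best] = True  # each ground truth can be matched at most once
        else:
            fp[rank] = 1          # low IoU, or duplicate of an already-matched GT
    return tp, fp, len(gts)       # FN = len(gts) - tp.sum()
```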

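Given the tp/fp flags from the matching step above, AP is the area under the interpolated PR curve. This sketch uses all-point interpolation (VOC 2010+ style); COCO additionally averages AP over ten IoU thresholds from 0.50 to 0.95. The per-class numbers at the end are made up:

```python
import numpy as np

def average_precision(tp, fp, num_gt):
    """All-point interpolated AP for one class (VOC 2010+ style).

    tp, fp: 0/1 flags sorted by descending confidence (from the matching
    step); num_gt: total number of ground-truth boxes for this class."""
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(fp)
    recall = tp_cum / max(num_gt, 1)
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1e-12)
    # Interpolation: at each recall level keep the highest precision reached
    # at that recall or beyond, making the curve monotonically decreasing.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # Area under the interpolated PR curve, summed over recall steps.
    r = np.concatenate(([0.0], recall))
    p = np.concatenate(([precision[0] if len(precision) else 0.0], precision))
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))

# mAP is simply the mean of the per-class APs (these numbers are made up).
aps = {"cat": 0.72, "dog": 0.65}
mAP = sum(aps.values()) / len(aps)
print(f"mAP = {mAP:.3f}")
```
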
COCO Evaluation Result:
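
The standard COCO summary (AP averaged over IoU=0.50:0.95, AP50, AP75, AP/AR by object size) can be produced with pycocotools; a minimal sketch, assuming ground-truth and detection files in COCO JSON format (both paths are placeholders):

```python
# Produce the standard COCO evaluation printout with pycocotools
# (pip install pycocotools). Both JSON paths below are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val2017.json")      # ground-truth annotations (COCO format)
coco_dt = coco_gt.loadRes("detections.json")  # your model's detections

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP@[0.50:0.95], AP50, AP75, AP/AR by size, etc.
```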

Source: blog.csdn.net/March_A/article/details/130549566