Study Notes 3 - Evaluation Metrics for Object Detection (Accuracy, TP, FP, FN, TN, AP, ROC, AUC, mAP, [email protected], [email protected]:.95)

Evaluation metrics for object detection:

I originally wanted to sort out these evaluation metrics myself, but after searching for material I found that several bloggers have already compiled them very thoroughly, so I have collected their write-ups here, hehe~~

Target Detection - Evaluation Index - Deep Machine Learning - Blog Park (cnblogs.com)

Python implements confusion matrix - Zhihu (zhihu.com)

1. Accuracy

Accuracy is the proportion of correctly predicted samples among all samples. It is the most common evaluation metric.

Accuracy is generally used to evaluate a model's overall correctness, but on its own it cannot fully characterize performance: on a class-imbalanced dataset, for example, a model that always predicts the majority class can still score a high accuracy.
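
A minimal sketch of this definition (the toy labels below are hypothetical):

```python
# Accuracy = correctly predicted samples / all samples.
# y_true / y_pred are hypothetical toy labels for a binary task.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(f"accuracy = {accuracy:.3f}")  # 6 of 8 correct -> 0.750
```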

2. Confusion Matrix

In the confusion matrix, the horizontal axis gives the classes predicted by the model and the vertical axis gives the true classes of the data. The diagonal entries count the samples whose predictions agree with their labels, so the sum of the diagonal divided by the total size of the test set is exactly the accuracy. The larger the numbers on the diagonal the better; in a visualization, the darker a diagonal cell, the more accurately the model predicts that class. Reading along a row, every off-diagonal entry counts samples of that true class that were predicted as the wrong class. In short: high values on the diagonal and low values off the diagonal are what you want.

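A minimal sketch of building and reading a confusion matrix, using scikit-learn's `confusion_matrix` (the toy labels are hypothetical):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels for a 3-class problem.
y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 0, 2, 1]

cm = confusion_matrix(y_true, y_pred)  # rows: true class, columns: predicted class
print(cm)

# The diagonal counts the correct predictions, so accuracy = trace / total,
# matching the definition above.
print("accuracy =", np.trace(cm) / cm.sum())  # 6/8 = 0.75
```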

3. Precision and Recall


TP (true positive): positive samples predicted as positive

TN (true negative): negative samples predicted as negative

FP (false positive): negative samples predicted as positive

FN (false negative): positive samples predicted as negative

P (precision): the proportion of samples predicted as positive that are actually positive, P = TP / (TP + FP).

R (recall): the proportion of actual positive samples that are predicted as positive, R = TP / (TP + FN).
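
A minimal sketch of the two formulas (the counts are hypothetical):

```python
# Precision and recall from hypothetical confusion-matrix counts.
TP, FP, FN = 70, 30, 10

precision = TP / (TP + FP)  # of everything predicted positive, how much is right
recall = TP / (TP + FN)     # of all actual positives, how many were found

print(f"precision = {precision:.3f}")  # 70/100 = 0.700
print(f"recall    = {recall:.3f}")     # 70/80  = 0.875
```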

PR curve (precision-recall curve):

The PR plot intuitively shows a learner's precision and recall over the whole sample set. When comparing learners, if one learner's PR curve is completely "enclosed" by another learner's curve, the latter can be asserted to perform better than the former. The area under the PR curve reflects, to some extent, how well a learner achieves "double-high" precision and recall at the same time.
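
A minimal sketch of tracing a PR curve from model confidence scores, using scikit-learn's `precision_recall_curve` (the labels and scores are hypothetical):

```python
from sklearn.metrics import precision_recall_curve

# Hypothetical binary labels and model confidence scores.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.3, 0.1]

# Sweeping a threshold over the scores yields one (recall, precision)
# point per threshold; plotting these points gives the PR curve.
precision, recall, thresholds = precision_recall_curve(y_true, scores)
for p, r in zip(precision, recall):
    print(f"recall = {r:.2f}, precision = {p:.2f}")
```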

4. ROC and AUC

  • Horizontal axis: false positive rate (FPR), FPR = FP / (FP + TN), the probability that a negative sample is wrongly predicted as positive, i.e., the false alarm rate;
  • Vertical axis: true positive rate (TPR), TPR = TP / (TP + FN), the probability that a positive sample is correctly predicted, i.e., the hit rate.
  • The diagonal corresponds to a random-guess model, while the point (0, 1) corresponds to the ideal model that ranks all positive examples before all negative examples. The closer the curve gets to the upper-left corner, the better the classifier. The ROC curve has a useful property: it remains unchanged when the distribution of positive and negative samples in the test set changes. Class imbalance is common in real data sets, that is, there are far more negative samples than positive ones (or vice versa), and the class distribution of the test data may also drift over time.
  • AUC (Area Under Curve) is the area under the ROC curve. The closer the AUC is to 1, the better the classifier.
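
A minimal sketch of computing the ROC curve and its AUC with scikit-learn (the labels and scores are hypothetical):

```python
from sklearn.metrics import roc_curve, auc

# Hypothetical binary labels and model confidence scores.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.3, 0.1]

fpr, tpr, thresholds = roc_curve(y_true, scores)  # FPR on x-axis, TPR on y-axis
roc_auc = auc(fpr, tpr)                           # area under the ROC curve
print(f"AUC = {roc_auc:.3f}")  # the closer to 1, the better
```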

5. mAP, [email protected], [email protected]:.95

Average Precision (AP) and mean Average Precision (mAP)

AP is the area under the PR curve, and a better classifier has a higher AP value.

For multi-class object detection tasks, mAP is used: the AP is computed for each class and then averaged over all classes to obtain mAP. mAP always lies in the interval [0, 1], and larger is better. It is the single most important metric for object detection algorithms.
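
A simplified sketch of the averaging step, assuming per-class AP values have already been computed (the class names and numbers are hypothetical; real benchmarks compute each AP from the ranked detections of that class):

```python
# Hypothetical per-class AP values for a 4-class detector.
ap_per_class = {"car": 0.82, "person": 0.74, "bicycle": 0.61, "dog": 0.55}

# mAP is simply the mean of the per-class APs; it always lies in [0, 1].
mAP = sum(ap_per_class.values()) / len(ap_per_class)
print(f"mAP = {mAP:.3f}")  # (0.82 + 0.74 + 0.61 + 0.55) / 4 = 0.680
```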

[email protected] means AP computed at an IoU threshold of 0.5, i.e., a detection counts as a true positive only if its IoU with a matching ground-truth box is at least 0.5; [email protected] is defined the same way with a 0.7 threshold.

[email protected]:.95 means the IoU threshold is swept from 0.50 to 0.95 in steps of 0.05, and the resulting AP values at these ten thresholds are averaged, as sketched below.
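
A minimal sketch of the IoU computation behind these thresholds, together with the 0.50:0.05:0.95 sweep (the box coordinates are hypothetical, given as (x1, y1, x2, y2)):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred, gt = (10, 10, 50, 50), (20, 20, 60, 60)
print(f"IoU = {iou(pred, gt):.3f}")  # 900 / 2300 ≈ 0.391

# The ten IoU thresholds averaged by [email protected]:.95.
thresholds = np.linspace(0.50, 0.95, 10)  # 0.50, 0.55, ..., 0.95
print(thresholds)
```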


Source: blog.csdn.net/daige123/article/details/121648894