Notes on Object Detection Metrics

Although I work on object detection, the meaning and calculation of its evaluation metrics always feel unclear and are easy to forget, so I am writing this note to record them.

Confusion Matrix

Some explanations of TP, FP, FN, and TN on the Internet made me dizzier the more I read them; the following is my own understanding.

The first letter, T or F, indicates whether the model's prediction is correct (True) or wrong (False).

The second letter, P or N, indicates whether the model predicts a positive (P) or negative (N) example.

Take judging the quality of watermelon as an example:

TP : The model predicts that this watermelon is a good melon (P, positive), and it is actually a good melon (T, the prediction is correct, True).

FP : The model predicts that the watermelon is a good melon (P, positive), but it is actually a bad melon (F, wrong prediction, False).

TN : The model predicts that the watermelon is a bad melon (N, Negative), and it is actually a bad melon (T, the prediction is correct, True).

FN : The model predicts that the watermelon is a bad melon (N, Negative), but it is actually a good melon (F, wrong prediction, False).
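
A minimal sketch in Python of counting the four cases, using the melon example (the labels and predictions here are made up for illustration):

```python
# 1 = good melon (positive), 0 = bad melon (negative); hypothetical data
y_true = [1, 1, 0, 0, 1, 0]  # what each melon actually is
y_pred = [1, 0, 0, 1, 1, 0]  # what the model predicts

TP = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)  # predicted good, is good
FP = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)  # predicted good, is bad
TN = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 0)  # predicted bad, is bad
FN = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 1)  # predicted bad, is good

print(TP, FP, TN, FN)  # 2 1 2 1
```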

Accuracy

$$acc = \frac{TP + TN}{TP + TN + FP + FN}$$

Accuracy is the ratio of the number of predictions the model gets right to the total number of predictions it makes.
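
Continuing from the counts in the melon sketch above:

```python
acc = (TP + TN) / (TP + TN + FP + FN)  # (2 + 2) / 6 ≈ 0.667
```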

Precision (P)

$$precision = \frac{TP}{TP + FP}$$

Precision is the probability that a positive prediction the model makes for a class is correct. The focus is on how many of the model's positive predictions are actually correct.
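
Continuing the same sketch:

```python
precision = TP / (TP + FP)  # 2 / 3 ≈ 0.667: of 3 melons called good, 2 really are
```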

Recall (R)

$$recall = \frac{TP}{TP + FN}$$

Recall is the fraction of all actual positive samples that the model correctly predicts. The focus is on how completely the model's predictions cover the positive samples.
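
And again with the same counts:

```python
recall = TP / (TP + FN)  # 2 / 3 ≈ 0.667: of 3 actually good melons, 2 were found
```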

F1

$$F(k) = \frac{(1 + k^2) \cdot P \cdot R}{k^2 \cdot P + R}$$

F1 is F(1):

$$F1 = \frac{2 \cdot P \cdot R}{P + R}$$
F1 is used to compare the overall quality of algorithms; missed detections (which hurt recall) are usually treated as the higher-priority error, followed by false detections (which hurt precision).
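
A small helper for the general F(k) formula, continuing the melon sketch above (the function name is my own):

```python
def f_score(p: float, r: float, k: float = 1.0) -> float:
    """General F(k); k > 1 weights recall (missed detections) more heavily."""
    return (1 + k * k) * p * r / (k * k * p + r)

f1 = f_score(precision, recall)  # 2*P*R / (P+R) ≈ 0.667
```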

PR Curve

The PR curve plots precision on the vertical axis against recall on the horizontal axis. To compute it for one class, sort the model's prediction scores for that class in descending order, then treat each score in turn as the threshold; each threshold yields new TP, FP, and FN counts, from which P and R are calculated. The resulting (R, P) points form the PR curve for that class.
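
A minimal sketch of that procedure for one class, assuming `scores` are the model's confidences, `labels` are the 0/1 ground truth, and there is at least one positive sample:

```python
def pr_curve(scores, labels):
    """Sweep the threshold over the sorted scores; return (recall, precision) points."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    total_pos = sum(labels)          # TP + FN is fixed: all actual positives
    tp = fp = 0
    points = []
    for i in order:                  # lower the threshold past one score at a time
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        points.append((tp / total_pos, tp / (tp + fp)))
    return points
```

Note that precision can zig-zag as the threshold drops, which is why the curve is usually smoothed before computing its area (see AP below).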

AP and mAP

AP (Average Precision): average precision, with two common algorithms. One computes the area under the PR curve after the curve has been smoothed (interpolated). The other divides R into 11 evenly spaced points (0, 0.1, ..., 1.0) and averages the corresponding P values.
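
A sketch of the 11-point variant, reusing the (recall, precision) points from `pr_curve` above:

```python
def ap_11_point(points):
    """Average, over recall levels 0.0, 0.1, ..., 1.0, the best precision at recall >= r."""
    ap = 0.0
    for r in [i / 10 for i in range(11)]:
        candidates = [p for rec, p in points if rec >= r]
        ap += max(candidates) if candidates else 0.0
    return ap / 11
```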

mAP (mean AP): the mean of the per-category APs.
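
mAP is then just an average over classes, assuming one AP value per class (the numbers here are made up):

```python
ap_per_class = {"cat": 0.71, "dog": 0.65, "car": 0.80}  # hypothetical APs
mAP = sum(ap_per_class.values()) / len(ap_per_class)    # 0.72
```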

Original post: blog.csdn.net/qq_36571422/article/details/130607708