[Learning] Understanding Precision and Recall in Depth

1. The four cases

Precision and recall are common evaluation metrics for binary classification. The confusion matrix is as follows:

                        Predicted Positive (P)   Predicted Negative (N)
Prediction correct (T)  TP                       TN
Prediction wrong (F)    FP                       FN

Usually we take the class we care about as the positive class and the other classes as the negative class. (In dog-vs-cat binary classification, for example, we may care about the precision and recall for dogs.)

TP: the positive class predicted as the positive class (a picture predicted as dog that is actually labeled dog)
FN: the positive class predicted as the negative class (a picture predicted as cat whose actual label is dog)
FP: the negative class predicted as the positive class (a picture predicted as dog that is actually labeled cat)
TN: the negative class predicted as the negative class (a picture predicted as cat whose actual label is cat)

T / F indicates whether the prediction matches the picture's label (True if it does, False if it does not).

P / N indicates the predicted class for the picture (Positive or Negative).
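
To make the four cases concrete, here is a minimal Python sketch (illustrative, not from the original post; the function name and toy lists are my own) that tallies them with dog as the positive class:

```python
# Minimal sketch: tally the four cases for one positive class.

def confusion_counts(labels, predictions, positive="dog"):
    tp = fn = fp = tn = 0
    for label, pred in zip(labels, predictions):
        if label == positive and pred == positive:
            tp += 1  # labeled dog, predicted dog
        elif label == positive:
            fn += 1  # labeled dog, predicted as something else
        elif pred == positive:
            fp += 1  # labeled cat, predicted dog
        else:
            tn += 1  # labeled cat, predicted cat
    return tp, fn, fp, tn

labels      = ["dog", "dog", "dog", "cat", "cat", "cat"]
predictions = ["dog", "dog", "cat", "dog", "cat", "cat"]
print(confusion_counts(labels, predictions))  # (2, 1, 1, 2)
```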

2. Precision

Precision is calculated as:
\[ P = \frac{TP}{TP+FP} \]
This can be understood as follows:

TP + FP: all predicted positives, i.e., the number of pictures predicted to be the positive class

TP: the number of pictures predicted as the positive class that really are the positive class

In short: the proportion of correctly predicted pictures among all pictures predicted as the positive class (from the prediction's perspective: how many of the predictions are accurate).
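
As a small illustrative helper (assuming the counts have already been tallied, as in the sketch in Section 1), precision in code is just:

```python
# Precision: of everything predicted positive, the fraction that is
# actually positive (illustrative helper, not from the original post).

def precision(tp, fp):
    return tp / (tp + fp)

print(precision(tp=3, fp=1))  # 0.75
```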

3. Recall

Recall is calculated as:
\[ R = \frac{TP}{TP+FN} \]
This can be understood as follows:

TP + FN: all actual positives, i.e., the number of pictures labeled as the positive class

TP: the number of positive-class pictures predicted as the positive class

In short: the proportion of pictures correctly predicted as the positive class among all pictures labeled as the positive class (from the label's perspective: how many of the positives were recalled).
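
The matching illustrative helper for recall (again assuming the counts are known; not from the original post):

```python
# Recall: of everything labeled positive, the fraction the model
# actually found ("recalled").

def recall(tp, fn):
    return tp / (tp + fn)

print(recall(tp=3, fn=2))  # 0.6
```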

4. A binary classification example

Staying with the dog-vs-cat binary example (dog as the positive class), suppose the test set contains 20 pictures labeled dog and 20 pictures labeled cat. The model predicts 16 pictures as dog; 14 of these are indeed labeled dog, and the remaining 2 are labeled cat.

              Predicted dog   Predicted cat   Total
Labeled dog   TP: 14          FN: 6           20
Labeled cat   FP: 2           TN: 18          20
Total         16              24              40

From this we can calculate:
\[ precision = \frac{TP}{TP+FP} = \frac{14}{14+2} = 0.875 \]

\[ recall = \frac{TP}{TP+FN} = \frac{14}{14+6} = 0.7 \]
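
To double-check the arithmetic, here is a sketch that reconstructs the test set described above (the exact ordering of the lists is an assumption; only the counts matter):

```python
# Reconstruct the dog/cat test set from the text: 20 dogs, 20 cats;
# 14 dogs predicted correctly, 6 dogs missed, 2 cats predicted as dog.
y_true = ["dog"] * 20 + ["cat"] * 20
y_pred = ["dog"] * 14 + ["cat"] * 6 + ["dog"] * 2 + ["cat"] * 18

tp = sum(t == "dog" and p == "dog" for t, p in zip(y_true, y_pred))
fp = sum(t == "cat" and p == "dog" for t, p in zip(y_true, y_pred))
fn = sum(t == "dog" and p == "cat" for t, p in zip(y_true, y_pred))

print(tp / (tp + fp))  # precision: 14/16 = 0.875
print(tp / (tp + fn))  # recall:    14/20 = 0.7
```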

5. A multi-class example

This example is adapted from: https://www.itcodemonkey.com/article/9521.html

                   Actual_Class1   Actual_Class2   Actual_Class3
Predicted_Class1   30              20              10
Predicted_Class2   50              60              10
Predicted_Class3   20              20              80

For example, we compute the four counts for class2:

class2-TP: labeled class2 and predicted as class2 = 60

class2-FN: labeled class2 but predicted as another class = 20 + 20 = 40

class2-FP: not labeled class2 but predicted as class2 = 50 + 10 = 60

class2-TN: neither labeled class2 nor predicted as class2 = 30 + 10 + 20 + 80 = 140
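
Here is a small sketch (illustrative; it follows the table's convention of rows = predicted class, columns = actual class) that derives all four counts for any class index:

```python
# Confusion matrix from the table above: rows = predicted, cols = actual.
matrix = [
    [30, 20, 10],  # predicted class1
    [50, 60, 10],  # predicted class2
    [20, 20, 80],  # predicted class3
]

def per_class_counts(m, k):
    total = sum(sum(row) for row in m)
    tp = m[k][k]
    fp = sum(m[k]) - tp                 # predicted k, labeled otherwise
    fn = sum(row[k] for row in m) - tp  # labeled k, predicted otherwise
    tn = total - tp - fp - fn           # everything else
    return tp, fn, fp, tn

print(per_class_counts(matrix, 1))  # class2 -> (60, 40, 60, 140)
```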

6. Other metrics

F1 is the harmonic mean of precision and recall:
\[ \frac{2}{F_1} = \frac{1}{P} + \frac{1}{R} \]

Substituting P = TP/(TP+FP) and R = TP/(TP+FN) gives the equivalent form in terms of counts:
\[ F_1 = \frac{2TP}{2TP+FP+FN} \]
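
As a quick sanity check, both forms of F1 agree on the class2 counts from Section 5 (TP = 60, FP = 60, FN = 40):

```python
# F1 two ways, with the class2 counts TP=60, FP=60, FN=40 from above.
tp, fp, fn = 60, 60, 40
p = tp / (tp + fp)                  # precision = 0.5
r = tp / (tp + fn)                  # recall    = 0.6
print(2 * p * r / (p + r))          # harmonic mean  ~0.5455
print(2 * tp / (2 * tp + fp + fn))  # count form     ~0.5455
```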

Original post: www.cnblogs.com/pprp/p/11241954.html