Machine Learning Performance Evaluation Metrics

TP, TN, FP, FN

FN: False Negative. The model predicts the sample as negative, but it is actually a positive sample (a missed positive).
FP: False Positive. The model predicts the sample as positive, but it is actually a negative sample (a false alarm).
TN: True Negative. The model predicts the sample as negative, and it is indeed a negative sample; the model is correct.
TP: True Positive. The model predicts the sample as positive, and it is indeed a positive sample; the model is correct.
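These four counts can be tallied directly from predicted and true labels. A minimal sketch in Python (the function name `confusion_counts` and the 0/1 label encoding are assumptions for illustration):

```python
def confusion_counts(y_true, y_pred):
    """Tally TP, FP, TN, FN for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn
```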

 

Precision

Also called the precision ratio, precision is the proportion of the samples the model predicts as positive (TP + FP) that are truly positive.

$$P=\frac{TP}{TP+FP}$$
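A small sketch of the formula (the helper name `precision` is illustrative; returning 0.0 when there are no positive predictions is a common convention, not part of the definition):

```python
def precision(tp, fp):
    """Precision = TP / (TP + FP); 0.0 if the model made no positive predictions."""
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0
```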

 

Recall

Also called the recall ratio, recall is the proportion of the truly positive samples (TP + FN) that the model correctly identifies as positive.

$$R=\frac{TP}{TP+FN}$$
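The same sketch for recall (the helper name `recall` is illustrative; the zero-denominator guard is again just a convention):

```python
def recall(tp, fn):
    """Recall = TP / (TP + FN); 0.0 if there are no positive samples."""
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0
```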

 

PR curve

The PR curve is drawn with precision on the vertical axis and recall on the horizontal axis.
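One way to obtain the curve's points is to sweep a decision threshold over the model's scores and recompute precision and recall at each threshold. A sketch assuming binary 0/1 labels and real-valued scores (all names are illustrative):

```python
def pr_curve_points(y_true, scores):
    """Return (recall, precision) pairs, sweeping the decision threshold
    over the unique scores from highest to lowest."""
    points = []
    for thresh in sorted(set(scores), reverse=True):
        preds = [1 if s >= thresh else 0 for s in scores]
        tp = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, preds) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 0)
        prec = tp / (tp + fp) if tp + fp else 1.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        points.append((rec, prec))
    return points
```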


F-Measure

The F-measure is the weighted harmonic mean of precision and recall:

$$F=\frac{(\alpha^{2}+1)PR}{\alpha^{2}P+R}$$

When $\alpha = 1$, the measure is written as F1:

$$F1=\frac{2PR}{P+R}$$
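Both formulas can be checked with a small helper (the name `f_measure` is illustrative; returning 0.0 when both inputs are zero avoids division by zero):

```python
def f_measure(p, r, alpha=1.0):
    """Weighted harmonic mean: F = (alpha^2 + 1) * P * R / (alpha^2 * P + R).
    With alpha = 1 this reduces to F1 = 2PR / (P + R)."""
    if p == 0 and r == 0:
        return 0.0
    return (alpha ** 2 + 1) * p * r / (alpha ** 2 * p + r)
```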

 

ROC, AUC

The ROC curve is a tool for evaluating the generalization ability of a learner. Its vertical axis is the true positive rate (TPR) and its horizontal axis is the false positive rate (FPR):

$$TPR=\frac{TP}{TP+FN}$$

$$FPR=\frac{FP}{TN+FP}$$

AUC is the area enclosed under the ROC curve.
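A minimal sketch that traces the ROC points by sweeping a threshold over the scores and integrates the area with the trapezoidal rule (the function name and the 0/1 label encoding are assumptions for illustration):

```python
def roc_auc(y_true, scores):
    """Area under the ROC curve, computed by sweeping thresholds
    high-to-low and applying the trapezoidal rule to the (FPR, TPR) points."""
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    points = [(0.0, 0.0)]
    for thresh in sorted(set(scores), reverse=True):
        tp = sum(1 for t, s in zip(y_true, scores) if t == 1 and s >= thresh)
        fp = sum(1 for t, s in zip(y_true, scores) if t == 0 and s >= thresh)
        points.append((fp / n_neg, tp / n_pos))
    points.append((1.0, 1.0))
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2
    return auc
```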


Origin www.cnblogs.com/4PrivetDrive/p/12174030.html