Machine Learning - Evaluation Metrics

Organized from:

https://blog.csdn.net/woaidapaopao/article/details/77806273?locationnum=9&fps=1

  • Evaluating the quality of a classifier

1. Evaluating classifier quality

First of all, we must know the four confusion-matrix counts: TP (true positive: a positive example correctly predicted positive), FN (false negative: a positive example wrongly predicted negative), FP (false positive: a negative example wrongly predicted positive), and TN (true negative: a negative example correctly predicted negative). You can draw them as a 2x2 table.
Several commonly used indicators:

    • Precision: precision = TP/(TP+FP), i.e. TP divided by the number of examples predicted positive
    • Recall: recall = TP/(TP+FN) = TP/P, where P is the number of actual positives
    • F1 score: the harmonic mean of precision and recall, defined by 2/F1 = 1/recall + 1/precision
    • ROC curve: the ROC space is a plane with the false positive rate (FPR) on the x-axis and the true positive rate (TPR) on the y-axis, where TPR = TP/P = recall and FPR = FP/N (N is the number of actual negatives)
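The indicators above can be sketched as a small Python function; the function name and the example counts below are illustrative, not from the original article:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute common indicators from the four confusion-matrix counts."""
    precision = tp / (tp + fp)   # TP over all examples predicted positive
    recall = tp / (tp + fn)      # TP over all actual positives (= TPR)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    fpr = fp / (fp + tn)         # false positive rate, x-axis of ROC space
    return {"precision": precision, "recall": recall,
            "f1": f1, "tpr": recall, "fpr": fpr}

# Example: 8 true positives, 2 false positives, 2 false negatives, 8 true negatives
m = classification_metrics(tp=8, fp=2, fn=2, tn=8)
print(m)  # precision = recall = f1 = 0.8, fpr = 0.2
```

Note that TPR here is the same quantity as recall; a classifier's ROC curve is traced by recomputing (FPR, TPR) as the decision threshold varies.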

 
