Data Classification Analysis: Classifier Evaluation Methods


1. Basic concepts

True Positives (TP): correctly predicted as positive, actually positive
True Negatives (TN): correctly predicted as negative, actually negative
False Positives (FP): wrongly predicted as positive, actually negative
False Negatives (FN): wrongly predicted as negative, actually positive
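
A minimal sketch of how these four counts are tallied from a binary classifier's output; the y_true and y_pred lists are made-up example labels (1 = positive, 0 = negative), not data from the original post.

```python
# Hypothetical ground-truth labels and predictions (1 = positive, 0 = negative).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Count the four outcomes by comparing each prediction with its true label.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # predicted positive, actually positive
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # predicted negative, actually negative
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # predicted positive, actually negative
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # predicted negative, actually positive

print(tp, tn, fp, fn)  # -> 3 3 1 1
```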

2. Confusion Matrix

(Figure: confusion matrix)

                  Predicted positive   Predicted negative
Actual positive          TP                   FN
Actual negative          FP                   TN

The first row holds the actual positives and the second row the actual negatives; the first column holds the predicted positives and the second column the predicted negatives.

Accuracy (recognition rate): (TP + TN) / (TP + TN + FP + FN)
Error rate: (FP + FN) / (TP + TN + FP + FN)
Recall (the proportion of actual positives that are predicted as positive): TP / (TP + FN)
Precision (the proportion of predicted positives that are actually positive): TP / (TP + FP)
F score: (2 * precision * recall) / (precision + recall)
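
Continuing the hypothetical counts from the sketch above (TP = 3, TN = 3, FP = 1, FN = 1), the metrics can be computed directly:

```python
# Counts carried over from the earlier hypothetical example.
tp, tn, fp, fn = 3, 3, 1, 1

accuracy  = (tp + tn) / (tp + tn + fp + fn)                 # 0.75
error     = (fp + fn) / (tp + tn + fp + fn)                 # 0.25
recall    = tp / (tp + fn)                                  # 0.75, share of actual positives found
precision = tp / (tp + fp)                                  # 0.75, share of predicted positives that are correct
f1        = 2 * precision * recall / (precision + recall)   # 0.75

print(accuracy, error, recall, precision, f1)
```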


3. Evaluation metrics for multi-class models

Macro average:
Compute each metric per class, then take the arithmetic mean over all classes; this gives the macro precision, macro recall, and macro F score.
Micro average:
Pool the TP, FP, TN, and FN counts of all classes into one global confusion matrix, then compute the micro precision, micro recall, and micro F score from the pooled counts. A sketch of both averaging schemes follows below.
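
A minimal sketch comparing the two averaging schemes for a three-class problem; the per-class TP/FP/FN counts are invented illustration values. The macro F here is computed from the macro precision and recall, which is one common convention (another is to average the per-class F scores).

```python
# Hypothetical per-class counts for a 3-class problem (classes "A", "B", "C").
per_class = {
    "A": {"tp": 8, "fp": 2, "fn": 1},
    "B": {"tp": 5, "fp": 1, "fn": 4},
    "C": {"tp": 3, "fp": 3, "fn": 2},
}

# Macro average: compute the metric per class, then take the arithmetic mean.
precisions = [c["tp"] / (c["tp"] + c["fp"]) for c in per_class.values()]
recalls    = [c["tp"] / (c["tp"] + c["fn"]) for c in per_class.values()]
macro_p = sum(precisions) / len(precisions)
macro_r = sum(recalls) / len(recalls)
macro_f = 2 * macro_p * macro_r / (macro_p + macro_r)

# Micro average: pool the counts into one global confusion matrix, then compute once.
tp = sum(c["tp"] for c in per_class.values())
fp = sum(c["fp"] for c in per_class.values())
fn = sum(c["fn"] for c in per_class.values())
micro_p = tp / (tp + fp)
micro_r = tp / (tp + fn)
micro_f = 2 * micro_p * micro_r / (micro_p + micro_r)

print("macro:", macro_p, macro_r, macro_f)
print("micro:", micro_p, micro_r, micro_f)
```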
