Accuracy, Precision, Recall, and F-measure in Classification

Table of Contents

1. Accuracy

2. Precision

3. Recall

4. F-measure

Use sklearn in Python to compute the above metrics


Take a binary classification problem as an example. The confusion matrix is:

                Actual 1    Actual 0
Prediction 1    TP          FP
Prediction 0    FN          TN

1. TP (True Positive): the prediction is positive and the actual label is also positive.

2. FP (False Positive): the prediction is positive but the actual label is negative.

3. TN (True Negative): the prediction is negative and the actual label is also negative.

4. FN (False Negative): the prediction is negative but the actual label is positive.
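
To make the four counts concrete, here is a minimal hand-tally on toy 0/1 labels (the lists below are invented for illustration and are reused in the worked examples that follow):

# Tally the four confusion-matrix cells by hand for toy binary labels
# (0 = negative, 1 = positive); the data is invented for illustration.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]

TP = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)  # predicted 1, actually 1
FP = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)  # predicted 1, actually 0
FN = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 1)  # predicted 0, actually 1
TN = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 0)  # predicted 0, actually 0

print(TP, FP, FN, TN)  # 3 1 2 4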

1. Accuracy

That is, the proportion of all predictions that are correct:

ACC = \frac{TP+TN}{TP+FP+FN+TN}
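
For example, with the toy counts tallied above (TP = 3, TN = 4, FP = 1, FN = 2, 10 samples in total): ACC = \frac{3+4}{10} = 0.7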

2. Precision

Precision is the fraction of samples predicted positive that are actually positive; in other words, of all samples predicted as 1, the share whose actual label is 1:

Precision = \frac{TP}{TP+FP}
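
With the same toy counts: Precision = \frac{3}{3+1} = 0.75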

3. Recall


Recall is the fraction of actual positive samples that are correctly predicted positive; in other words, of all samples whose actual label is 1, the share correctly predicted as 1:

Recall = \frac{TP}{TP+FN}
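With the same toy counts: Recall = \frac{3}{3+2} = 0.6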

4. F-measure


F-measure is the weighted harmonic mean of Precision and Recall:

F = \frac{(\alpha^{2}+1) \cdot P \cdot R}{\alpha^{2} \cdot P + R}

When \alpha = 1, this reduces to:

F_{1} = \frac{2 \cdot P \cdot R}{P+R}
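
With the same toy counts: F_{1} = \frac{2 \times 0.75 \times 0.6}{0.75+0.6} \approx 0.667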

Use sklearn in Python to compute the above metrics

Method One

# Build the confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test_labels, y_pred_labels)

# Accuracy, precision, and recall
from sklearn.metrics import accuracy_score, precision_score, recall_score
print(accuracy_score(y_test_labels, y_pred_labels))
print('-========')
print(precision_score(y_test_labels, y_pred_labels))
print('-========')
print(recall_score(y_test_labels, y_pred_labels))

# F1 score
from sklearn.metrics import f1_score
f1_score(y_test_labels, y_pred_labels)
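
For a self-contained check, here is a sketch of Method One run on the toy labels tallied earlier (y_test_labels and y_pred_labels in the snippet above are placeholders from the original post); the expected outputs match the hand-tallied counts:

# Runnable check of Method One on the toy labels defined earlier.
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]

# Note: sklearn puts actual classes in rows and predicted classes in
# columns, with labels sorted ascending, so the binary layout is
# [[TN, FP], [FN, TP]] -- transposed relative to the table above.
print(confusion_matrix(y_true, y_pred))  # [[4 1], [2 3]]
print(accuracy_score(y_true, y_pred))    # 0.7
print(precision_score(y_true, y_pred))   # 0.75
print(recall_score(y_true, y_pred))      # 0.6
print(f1_score(y_true, y_pred))          # 0.666...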

Method Two

from sklearn.metrics import precision_recall_fscore_support
precision, recall, f1score, support = precision_recall_fscore_support(
    y_true, y_pred, beta=1.0, labels=None, pos_label=1, average=None,
    warn_for=('precision', 'recall', 'f-score'), sample_weight=None)
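
With average=None, as in the call above, each returned value is a per-class array ordered by label. A minimal sketch on the same toy labels:

# Per-class precision/recall/F1/support on the toy labels; with
# average=None, index 0 is class 0 and index 1 is class 1.
from sklearn.metrics import precision_recall_fscore_support

y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]

precision, recall, f1score, support = precision_recall_fscore_support(
    y_true, y_pred, beta=1.0, average=None)

print(precision)  # [0.66666667 0.75      ]
print(recall)     # [0.8 0.6]
print(f1score)    # [0.72727273 0.66666667]
print(support)    # [5 5]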

 

