Evaluation Metrics for Deep Learning Models (http://scikit-learn.org/stable/)

In machine learning, measuring and evaluating a model matters just as much as training it. Only by choosing an evaluation method that matches the problem can we quickly spot issues during model selection and training, and iteratively improve the model.

Common model evaluation metrics:

precision

recall

F1-score

PRC

ROC/AUC

IOU

(1) First, the confusion matrix:

For binary classification, every prediction falls into one of four cells: TP (true positive), FP (false positive), FN (false negative), or TN (true negative), so that

TP + FP + FN + TN = total number of samples

From these counts, precision = TP / (TP + FP) and recall = TP / (TP + FN).
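A minimal sketch of recovering these counts with scikit-learn; the toy labels below are made up for illustration:

from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0]   # hypothetical ground-truth labels
y_pred = [1, 0, 0, 1, 1, 0]   # hypothetical predicted labels

# for binary input, ravel() flattens the 2x2 matrix as TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

precision = tp / (tp + fp)   # 2 / 3 here
recall = tp / (tp + fn)      # 2 / 3 here
print(tn, fp, fn, tp, precision, recall)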

(2) PRC (Precision-Recall Curve): [x: Recall, y: Precision]

The curve is traced by sweeping a decision threshold over the classifier's scores. In the original figure, curves A, B, and C satisfy A > B > C: when one classifier's PR curve completely encloses another's, the enclosing classifier is the better one. A sketch of computing the curve follows below.
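A minimal sketch using scikit-learn's precision_recall_curve; the labels and scores are made up for illustration:

import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 1, 1])            # hypothetical ground truth
scores = np.array([0.1, 0.4, 0.35, 0.8])   # hypothetical predicted scores

# one (precision, recall) point per candidate threshold
precision, recall, thresholds = precision_recall_curve(y_true, scores)
print(precision)
print(recall)
print(thresholds)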

(3) F1-score

F1 considers both precision and recall, and is more informative than the single break-even point (BEP, the value where precision equals recall). It is the harmonic mean of the two:

F1 = 2 * precision * recall / (precision + recall)
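A minimal sketch with made-up binary labels:

from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 0]   # hypothetical ground truth
y_pred = [1, 0, 0, 1, 1, 0]   # hypothetical predictions

# here precision = recall = 2/3, so F1 = 2/3 as well
print(f1_score(y_true, y_pred))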

(4) ROC and AUC

a. ROC

X: FPR, Y: TPR

TPR is the same as recall, TPR = TP / (TP + FN); FPR is the fraction of actual negatives predicted positive, FPR = FP / (FP + TN).

b. AUC

AUC is the area under the ROC curve. A perfect classifier reaches AUC = 1; in general, the higher the AUC, the better the classifier.

How do we choose an evaluation metric? As a rule of thumb, when the classes are heavily imbalanced the precision-recall curve is usually more informative than ROC/AUC; otherwise ROC/AUC is a reasonable default, and F1 is convenient when a single summary number is needed.

(5) IOU

IOU (Intersection over Union) is the standard metric for object detection and segmentation: the area of overlap between the predicted region and the ground-truth region, divided by the area of their union. IOU = 1 means a perfect match, and a detection is typically counted as correct when IOU exceeds a threshold such as 0.5.
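A minimal sketch for axis-aligned bounding boxes in (x1, y1, x2, y2) form; the helper name and example boxes are made up for illustration:

def bbox_iou(box_a, box_b):
    # IOU of two axis-aligned boxes given as (x1, y1, x2, y2)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # clamp at zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

print(bbox_iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 = 0.142...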

Appendix:

(1) Precision with Python

from sklearn import metrics

# Read the ground-truth and predicted labels, one label per line.
with open(r'C:\Users\hp\Desktop\trainlabel.txt', 'r') as fT:
    train_label = [line.strip('\n') for line in fT]

with open(r'C:\Users\hp\Desktop\predictlabel.txt', 'r') as fP:
    predict_label = [line.strip('\n') for line in fP]

# precision = TP / (TP + FP), with '1' treated as the positive class
p = metrics.precision_score(train_label, predict_label, pos_label='1')
print(p)

(2) ROC/AUC with Python

import numpy as np
from sklearn.metrics import roc_curve, auc

y = np.array([1, 1, 2, 2])               # ground truth; 2 is the positive class
pred = np.array([0.1, 0.4, 0.35, 0.8])   # predicted scores

# one (FPR, TPR) point per candidate threshold
fpr, tpr, thresholds = roc_curve(y, pred, pos_label=2)
print(fpr)
print(tpr)
print(thresholds)

# area under the ROC curve; 0.75 for this toy example
print(auc(fpr, tpr))

(3) Recall with Python

from sklearn.metrics import recall_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# per-class recall is [1., 0., 0.], so every average comes out to 0.33...
recall_score(y_true, y_pred, average='macro')     # 0.33...
recall_score(y_true, y_pred, average='micro')     # 0.33...
recall_score(y_true, y_pred, average='weighted')  # 0.33...
recall_score(y_true, y_pred, average=None)        # array([1., 0., 0.])

Reposted from blog.csdn.net/dinry/article/details/83346318