Summary of Common Evaluation Metrics for Machine Learning

Classification tasks

TP (True Positive): a positive sample that the model predicts as positive.

FP (False Positive): a negative sample that the model predicts as positive.

FN (False Negative): a positive sample that the model predicts as negative.

TN (True Negative): a negative sample that the model predicts as negative.
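
To make these four cases concrete, here is a minimal Python sketch (not part of the original post) that counts them from binary label lists; the names confusion_counts, y_true, and y_pred are illustrative assumptions.

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FP, FN, TN for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# Hypothetical labels for six samples
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(confusion_counts(y_true, y_pred))  # (2, 1, 1, 2)
```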

Accuracy

Accuracy = \frac{n_{correct}}{n_{total}}

where n_{correct} is the number of correctly classified samples and n_{total} is the total number of samples.

This metric is affected by the sample size and by class imbalance: when one class dominates, a model can score high accuracy simply by always predicting the majority class.
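
A minimal sketch of the formula above, assuming binary labels encoded as 0/1; the helper name accuracy and all values are assumptions for the example. The last call illustrates the imbalance caveat.

```python
def accuracy(y_true, y_pred):
    """Accuracy = n_correct / n_total."""
    n_correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return n_correct / len(y_true)

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))  # 4 correct out of 6 -> 0.666...

# Imbalance caveat: predicting "negative" for every sample of a
# 95%-negative dataset already scores 0.95 accuracy.
print(accuracy([0] * 95 + [1] * 5, [0] * 100))  # 0.95
```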

F1 score

Precision: the proportion of samples judged positive by the classifier that are actually positive,

Precision = \frac{TP}{TP + FP}

Recall: the proportion of true positive samples that are correctly classified as positive,

Recall = \frac{TP}{TP + FN}

The F1 score is the harmonic mean of precision and recall:

F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}
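
Continuing the hypothetical example above, a minimal sketch that derives precision, recall, and F1 from the TP/FP/FN counts; the function name precision_recall_f1 is an assumption, not a reference to any particular library.

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and their harmonic mean (F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Counts from the earlier sketch: TP=2, FP=1, FN=1
print(precision_recall_f1(2, 1, 1))  # (0.666..., 0.666..., 0.666...)
```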

RMSE

RMSE (Root Mean Squared Error) is used to measure the quality of a regression model.

RMSE = \sqrt {\frac {\sum_{i=1}^n (y_{i} - \hat y_{i})^2} {n}}

where y_i is the actual value of the i-th sample point, \hat y_i is the predicted value of the i-th sample point, and n is the number of sample points.

In general, RMSE reflects well how far a regression model's predictions deviate from the true values. In practice, however, even a small number of outliers with very large deviations can make the RMSE metric very poor.
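
A minimal sketch of the RMSE formula, plus a hypothetical example of how a single outlier can dominate the metric; all values here are made up for illustration.

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between actual and predicted values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 4.0, 8.0]
print(rmse(y_true, y_pred))  # ~0.935

# A single large outlier dominates the metric:
print(rmse(y_true + [100.0], y_pred + [10.0]))  # ~40.26
```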

To be updated.

Source: blog.csdn.net/jcl314159/article/details/119062632