Evaluation of classification algorithms

Accuracy

Accuracy is the most commonly used evaluation metric: the proportion of correct predictions among all instances. However, when the data is imbalanced, accuracy alone cannot fully reflect the model's performance.
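As a minimal sketch of this point (the labels below are made up for illustration), a model that always predicts the majority class can still reach 90% accuracy on imbalanced data:

```python
from sklearn.metrics import accuracy_score

# Hypothetical imbalanced labels: 9 negatives, 1 positive
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
# A model that always predicts the negative class
y_pred = [0] * 10

# Accuracy = correct predictions / all instances
print(accuracy_score(y_true, y_pred))  # 0.9, yet the positive class is never found
```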

 

Precision

Precision is the proportion of instances predicted as positive that are actually positive: TP / (TP + FP).

 

Recall

Recall is the proportion of actual positive instances that are correctly predicted as positive: TP / (TP + FN).
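A small scikit-learn sketch of both metrics, using made-up label vectors for illustration:

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical true labels and predictions for a binary problem
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]

# Precision = TP / (TP + FP): of the predicted positives, how many are truly positive
print(precision_score(y_true, y_pred))  # 3 / 4 = 0.75
# Recall = TP / (TP + FN): of the true positives, how many were found
print(recall_score(y_true, y_pred))     # 3 / 4 = 0.75
```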

Confusion matrix

True Positive (TP): the number of positive instances predicted as positive

True Negative (TN): the number of negative instances predicted as negative

False Positive (FP): the number of negative instances predicted as positive (Type I error)

False Negative (FN): the number of positive instances predicted as negative (Type II error)
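Assuming binary labels 0/1, the following sketch builds the confusion matrix with scikit-learn and reads off the four counts (the label vectors are the same illustrative ones as above):

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]

# For binary labels {0, 1}, sklearn returns the matrix as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")  # TP=3, TN=3, FP=1, FN=1
```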

Model parameter tuning

Cross-validation

Cross-validation lets the data be used for both training and validation.

The training data is split into a training part and a validation part.

K-fold cross-validation divides the training data into K folds; each fold serves once as the validation set while the remaining K-1 folds are used for training.
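As a hedged example, the sketch below runs 5-fold cross-validation with scikit-learn; the iris dataset and logistic regression model are placeholders chosen only for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: split the training data into 5 folds,
# train on 4 folds and validate on the remaining one, rotating the validation fold
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())  # average validation score across the 5 folds
```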


Grid search

Grid search evaluates the effect of each candidate parameter value and selects the values that perform well.

It tries every combination of the candidate parameter values.
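A minimal grid-search sketch using scikit-learn's GridSearchCV; the SVC model and the candidate parameter grid are assumptions made for illustration, not choices from the original post:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate values for each parameter; every combination is tried
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

# Each combination is scored with 5-fold cross-validation
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # the best-performing combination
print(search.best_score_)   # its mean cross-validation score
```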


Origin blog.csdn.net/qq_38851184/article/details/113974013