Classification and Prediction Algorithm Evaluation
A classification or prediction model's accuracy on the training set it was built from does not reflect how well it will perform on future data. To judge a model's performance reliably, you need a set of data that did not participate in building the model, and you then evaluate the model's accuracy on that data. This independent data set is called the test set.
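The holdout idea described above can be sketched with a minimal random split; the function name and the 70/30 ratio are illustrative choices, not from the original text:

```python
import random

def train_test_split(data, test_ratio=0.3, seed=42):
    """Randomly hold out a fraction of the data as an independent test set."""
    rng = random.Random(seed)
    indices = list(range(len(data)))
    rng.shuffle(indices)
    cut = int(len(data) * (1 - test_ratio))
    train = [data[i] for i in indices[:cut]]
    test = [data[i] for i in indices[cut:]]
    return train, test

samples = list(range(10))
train, test = train_test_split(samples)
print(len(train), len(test))  # 7 3
```

In practice a library routine such as scikit-learn's `train_test_split` is normally used instead; the point is only that the test rows never touch model fitting.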
To evaluate a predictive model, several indicators are commonly used:
- Absolute / relative error (Absolute Error, E; Relative Error, e)
- Mean absolute error (Mean Absolute Error, MAE)
- Mean squared error (Mean Squared Error, MSE)
- Root mean squared error (Root Mean Squared Error, RMSE)
- Mean absolute percentage error (Mean Absolute Percentage Error, MAPE)
- Kappa statistic
- Accuracy (computed from TP, TN, FP, FN)
- Precision
- Recall
- ……
A good reference article:
https://baijiahao.baidu.com/s?id=1603857666277651546&wfr=spider&for=pc