Performance Metrics for Machine Learning Regression Models, with Python Implementations

Mean Absolute Error (MAE)

$$MAE = \frac{1}{m} \sum_{i=1}^{m} |y_i - \hat{y}_i|$$
where $y_i - \hat{y}_i$ is the difference between the true value and the predicted value on the test set.
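As a minimal sketch (assuming y_true and y_pred are NumPy arrays of the same length; the helper name is illustrative, not part of sklearn), the definition maps directly to NumPy:

import numpy as np

def mae(y_true, y_pred):
    # mean of the absolute residuals |y_i - y_hat_i|
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))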

Mean Squared Error (MSE)

$$MSE = \frac{1}{m} \sum_{i=1}^{m} (y_i - \hat{y}_i)^2$$
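A matching sketch for MSE, under the same assumptions and reusing the numpy import above:

def mse(y_true, y_pred):
    # mean of the squared residuals (y_i - y_hat_i)^2
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)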

Root Mean Squared Error (RMSE)

$$RMSE = \sqrt{\frac{1}{m} \sum_{i=1}^{m} (y_i - \hat{y}_i)^2}$$

As can be seen, $RMSE = \sqrt{MSE}$.
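Accordingly, a sketch of RMSE simply reuses the mse helper defined above:

def rmse(y_true, y_pred):
    # RMSE = sqrt(MSE), so it is expressed in the same units as the target
    return np.sqrt(mse(y_true, y_pred))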
The metrics above are scale-dependent: their magnitudes differ from one problem to another, which makes them hard to interpret on their own. The following metrics address this.

R-Squared (R²)

$$R^2 = 1 - \frac{\sum_{i} (y_i - \hat{y}_i)^2}{\sum_{i} (y_i - \bar{y})^2}$$

Here the numerator is the sum of squared differences between the true values and the predictions (analogous to the MSE), and the denominator is the sum of squared differences between the true values and their mean (analogous to the variance Var).

The value of R-Squared indicates how good the model is. Its value is at most 1:

  • a value of 0 means the model fits poorly: it does no better than always predicting the mean of the true values (a model that fits even worse can yield a negative value);
  • a value of 1 means the model makes no errors: the predictions match the true values exactly.

In general, the larger R-Squared is, the better the fit. However, R-Squared only gives a rough picture of accuracy: it never decreases as more features are added to the model, even if those features carry no useful information, so by itself it cannot reliably quantify how accurate the model is.
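A minimal sketch of R-Squared from the formula above (the helper name is illustrative; sklearn's r2_score, used later in this post, computes the same quantity):

def r_squared(y_true, y_pred):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)   # total sum of squares
    return 1 - ss_res / ss_tot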

Adjusted R-Squared

$$R^2_{Adjusted} = 1 - \frac{(1 - R^2)(n - 1)}{n - p - 1}$$

where n is the number of samples and p is the number of features.

Adjusted R-Squared corrects R-Squared for the number of features relative to the sample size, so adding uninformative features no longer inflates the score; as with R-Squared, the larger the value, the better.
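sklearn does not provide this metric directly, so a small helper is needed; a sketch following the formula above (the name is illustrative):

def adjusted_r_squared(r2, n, p):
    # penalize R^2 by the number of features p relative to the sample size n
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

The same expression is computed inline in the evaluation code below.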

The following walks through the Boston house-price prediction example from sklearn:

# Note: load_boston was deprecated and removed in scikit-learn 1.2; this example assumes an older version
from sklearn.datasets import load_boston
boston = load_boston()
print(boston.DESCR)
.. _boston_dataset:

Boston house prices dataset
---------------------------

**Data Set Characteristics:**  

    :Number of Instances: 506 

    :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.

    :Attribute Information (in order):
        - CRIM     per capita crime rate by town
        - ZN       proportion of residential land zoned for lots over 25,000 sq.ft.
        - INDUS    proportion of non-retail business acres per town
        - CHAS     Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
        - NOX      nitric oxides concentration (parts per 10 million)
        - RM       average number of rooms per dwelling
        - AGE      proportion of owner-occupied units built prior to 1940
        - DIS      weighted distances to five Boston employment centres
        - RAD      index of accessibility to radial highways
        - TAX      full-value property-tax rate per $10,000
        - PTRATIO  pupil-teacher ratio by town
        - B        1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
        - LSTAT    % lower status of the population
        - MEDV     Median value of owner-occupied homes in $1000's

    :Missing Attribute Values: None

    :Creator: Harrison, D. and Rubinfeld, D.L.

This is a copy of UCI ML housing dataset.
https://archive.ics.uci.edu/ml/machine-learning-databases/housing/


This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.

The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic
prices and the demand for clean air', J. Environ. Economics & Management,
vol.5, 81-102, 1978.   Used in Belsley, Kuh & Welsch, 'Regression diagnostics
...', Wiley, 1980.   N.B. Various transformations are used in the table on
pages 244-261 of the latter.

The Boston house-price data has been used in many machine learning papers that address regression
problems.   
     
.. topic:: References

   - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
   - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
# Load the data
from sklearn.model_selection import train_test_split
import numpy as np

X = boston.data
y = boston.target

print(X.shape)
# Hold out 25% of the samples as a random test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33, test_size=0.25)
print('max value:{}'.format(np.max(boston.target)))
print('min value:{}'.format(np.min(boston.target)))
print('ave value:{}'.format(np.mean(boston.target)))
(506, 13)
max value:50.0
min value:5.0
ave value:22.532806324110677
# Standardize the features and the target
from sklearn.preprocessing import StandardScaler

ss_X = StandardScaler()
ss_y = StandardScaler()

X_train = ss_X.fit_transform(X_train)
X_test = ss_X.transform(X_test)
# StandardScaler expects 2-D input, so the 1-D targets are reshaped into column vectors
y_train = ss_y.fit_transform(y_train.reshape(-1, 1))
y_test = ss_y.transform(y_test.reshape(-1, 1))
# Train the models
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train, y_train.ravel())
lr_y_predict = lr.predict(X_test)

from sklearn.linear_model import SGDRegressor
sgdr = SGDRegressor()
sgdr.fit(X_train, y_train.ravel())

sgdr_y_predict = sgdr.predict(X_test)
# Evaluate the models
from sklearn.metrics import mean_squared_error # MSE
from sklearn.metrics import mean_absolute_error # MAE
from sklearn.metrics import r2_score # R-Square

# Map the standardized targets and predictions back to the original price scale
# (predictions are reshaped to column vectors, as inverse_transform expects 2-D input)
y_test = ss_y.inverse_transform(y_test)
lr_y_predict = ss_y.inverse_transform(lr_y_predict.reshape(-1, 1))
sgdr_y_predict = ss_y.inverse_transform(sgdr_y_predict.reshape(-1, 1))

# lr
print("\nEvaluation of LinearRegression:")
# MSE:
print("MSE:{}".format(mean_squared_error(y_test, lr_y_predict)))
# RMSE:
print("RMSE:{}".format(np.sqrt(mean_squared_error(y_test, lr_y_predict))))
# MAE:
print("MAE:{}".format(mean_absolute_error(y_test, lr_y_predict)))
# R2:
print("r2_score:{}".format(r2_score(y_test, lr_y_predict)))
# Adjusted_R2: n is the number of samples, p is the number of features
n = X.shape[0]
p = X.shape[1]
print("r2_adjusted:{}".format(1-((1-r2_score(y_test, lr_y_predict))*(n-1))/(n-p-1))) 
      
# sgdr
print("\nEvaluation of SGDRegressor:")
# MSE:
print("MSE:{}".format(mean_squared_error(y_test, sgdr_y_predict)))
# RMSE:
print("RMSE:{}".format(np.sqrt(mean_squared_error(y_test, sgdr_y_predict))))
# MAE:
print("MAE:{}".format(mean_absolute_error(y_test, sgdr_y_predict)))
# R2:
print("r2_score:{}".format(r2_score(y_test, sgdr_y_predict)))
# Adjusted_R2:
n = X.shape[0]
p = X.shape[1]
print("r2_adjusted:{}".format(1-((1-r2_score(y_test, sgdr_y_predict))*(n-1))/(n-p-1)))       
Evaluation of LinearRegression:
MSE:25.139236520353442
RMSE:5.013904319026585
MAE:3.532532543705398
r2_score:0.6757955014529482
r2_adjusted:0.6672291224262985

Evaluation of SGDRegressor:
MSE:26.091709280756238
RMSE:5.1080044323352185
MAE:3.510020265729822
r2_score:0.6635120753665551
r2_adjusted:0.6546211342685169

Reposted from blog.csdn.net/qq_40326280/article/details/112488026