A Complete Machine Learning Project with sklearn — Boston Housing Price Prediction as an Example (Part 2: Select a Model and Train It)

We have finally reached this step! In the previous parts you framed the problem, got the data, explored it, sampled a test set, and wrote automated transformation pipelines to clean the data and prepare it for the algorithms. You are now ready to select and train a machine learning model.

So let's start with linear models!

Ordinary least squares, the most basic of the generalized linear models, fits a linear model with coefficients w = (w_1, ..., w_p) so as to minimize the residual sum of squares between the observed targets in the dataset and the targets predicted by the linear approximation. Its mathematical expression is:

\underset{w}{\min}\; \lVert X w - y\rVert_2^2

(If you are doing classification instead, see logistic regression, e.g. in Li Hang's book.)

Generalized linear models are also very easy to use in sklearn:

from sklearn.linear_model import LinearRegression

lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
# Output the learned weights (coefficients)
lin_reg.coef_
# Take the first 6 rows for a quick sanity check
some_data = housing_prepared[:6]
some_labels = housing_labels[:6]
lin_reg.predict(some_data)  # Output:

array([ 203682.37379543,  326371.39370781,  204218.64588245,
         58685.4770482 ,  194213.06443039,  156914.96268363])

The actual values are [286600.0, 340600.0, 196900.0, 46300.0, 254500.0, 127900.0], so some of the predictions are off by roughly 20% or more.
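A quick way to verify that rough figure is to compute the relative errors directly; a minimal sketch, reusing some_data and some_labels from above (assuming housing_labels is array-like, as in the later cells):

import numpy as np

# Rough check of the relative error on those 6 districts
predictions = lin_reg.predict(some_data)
actuals = np.asarray(some_labels)
relative_errors = np.abs(predictions - actuals) / actuals
print(relative_errors)  # e.g. the first district is off by ~29%, the second by ~4%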

Drawbacks of ordinary least squares

The coefficient estimates produced by ordinary least squares depend on the independence of the model terms. When the terms are correlated and the columns of the design matrix X are approximately linearly dependent, the design matrix becomes close to singular, which makes the least-squares estimate highly sensitive to random errors and produces a large variance. This kind of multicollinearity can easily arise, for example, when data are collected without an experimental design; as a result, when solving for the coefficients, the required inverse matrix either cannot be computed or can only be replaced by a pseudo-inverse.

The pseudo-inverse is a generalization of the matrix inverse. Singular matrices and non-square matrices have no inverse, but in MATLAB you can compute their pseudo-inverse with the function pinv(A). The basic syntax is X = pinv(A) or X = pinv(A, tol), where tol is the tolerance (pinv is short for pseudo-inverse); the default tolerance is max(size(A))*norm(A)*eps. The function returns a matrix X with the same shape as the transpose A', satisfying AXA = A and XAX = X. This X is called the pseudo-inverse of A, also known as the generalized inverse. pinv(A) has some of the properties of inv(A) but is not fully equivalent to it: if A is a nonsingular square matrix, pinv(A) = inv(A), but pinv takes far more computation time, whereas inv(A) is cheaper by comparison.
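The same idea is available outside MATLAB via NumPy's np.linalg.pinv; here is a minimal sketch with a deliberately singular toy matrix (the matrix A is purely illustrative):

import numpy as np

# A singular matrix: the second column is twice the first, so inv() would fail
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

A_pinv = np.linalg.pinv(A)  # Moore-Penrose pseudo-inverse
print(np.allclose(A @ A_pinv @ A, A))            # True: A X A = A
print(np.allclose(A_pinv @ A @ A_pinv, A_pinv))  # True: X A X = X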

In this situation we usually switch to the lasso or ridge regression instead. Ridge regression, in particular, overcomes the missing-inverse problem: for any positive regularization parameter λ, the matrix X^T X + λI is always invertible.

For the details, see my post introducing linear regression: https://blog.csdn.net/PythonstartL/article/details/82993166
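As a quick illustration, both models are available in sklearn.linear_model; the alpha values below (sklearn's name for λ) are only illustrative, not tuned:

from sklearn.linear_model import Ridge, Lasso

# alpha plays the role of λ above; larger values mean stronger regularization
ridge_reg = Ridge(alpha=1.0)
ridge_reg.fit(housing_prepared, housing_labels)

lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(housing_prepared, housing_labels)

ridge_reg.coef_  # coefficients shrink toward zero; the lasso can set some exactly to zero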

Evaluating the model

For regression problems, we generally use the mean squared error (and its square root, the RMSE) as the evaluation metric.
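Concretely, for a training set with m instances, the root mean squared error is

\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\bigl(\hat{y}^{(i)} - y^{(i)}\bigr)^{2}}

which is exactly what taking np.sqrt of sklearn's mean_squared_error computes below: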

import numpy as np
from sklearn.metrics import mean_squared_error

# Evaluate the linear model on the full training set
housing_predictions = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse

In fact the sklearn.metrics module also provides mean_absolute_error, r2_score, the mean squared log error, and other metrics; to learn more, see http://sklearn.apachecn.org/cn/0.19.0/modules/model_evaluation.html#regression-metrics
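For example, a minimal sketch reusing the training-set predictions from above:

from sklearn.metrics import mean_absolute_error, r2_score

# MAE is less sensitive to outliers than (R)MSE
lin_mae = mean_absolute_error(housing_labels, housing_predictions)
# R^2 is the coefficient of determination: 1.0 means a perfect fit
lin_r2 = r2_score(housing_labels, housing_predictions)
print(lin_mae, lin_r2)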

Regression with a decision tree

from sklearn.tree import DecisionTreeRegressor

tree_reg = DecisionTreeRegressor(random_state=42)
tree_reg.fit(housing_prepared, housing_labels)
# Evaluate on the same training data it was trained on
housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse

We find that it overfits extremely easily!!!!

The output is tree_rmse = 0??!!

Wait, what happened? No error at all? Could this model really be absolutely perfect? Far more likely, the model has badly overfit the data. How can we be sure? As mentioned before, do not touch the test set until you are ready to launch a model you are confident in; instead, use part of the training set for training and part of it for model validation, as in the sketch below.
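One simple option is to hold out a validation set with train_test_split (a minimal sketch; the cross-validation approach shown next is usually more convenient because it uses every training instance for both training and validation):

from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import numpy as np

# Hold out 20% of the training data purely for validation
X_train, X_val, y_train, y_val = train_test_split(
    housing_prepared, housing_labels, test_size=0.2, random_state=42)

tree_reg.fit(X_train, y_train)
val_rmse = np.sqrt(mean_squared_error(y_val, tree_reg.predict(X_val)))
val_rmse  # no longer 0: the model does much worse on data it has not seen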

Cross-validation results

from sklearn.model_selection import cross_val_score

# cross_val_score expects a utility function (greater is better),
# so sklearn scores with the *negative* MSE; negate it back before the sqrt
scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
                         scoring="neg_mean_squared_error", cv=10)
rmse_tree_score = np.sqrt(-scores)

def display(scores):
    print("Mean", scores.mean())
    print("Score", scores)
    print("Std", scores.std())

display(rmse_tree_score)
Mean 71006.1028738
Score [ 68692.62066314  66603.42278774  71443.25077938  69170.44729479
  71198.2811685   74702.47214489  70143.54705527  70068.33224653
  76934.29689947  71104.35769791]
Std 2806.6555643

Regression with a random forest

from sklearn.ensemble import RandomForestRegressor

forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
housing_predictions = forest_reg.predict(housing_prepared)
forest_mse = mean_squared_error(housing_labels, housing_predictions)
forest_rmse = np.sqrt(forest_mse)

# Cross-validate the random forest as well
scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
                         scoring="neg_mean_squared_error", cv=10)
forest_score = np.sqrt(-scores)
display(forest_score)
Mean 52621.9422229
Score [ 53434.78319574  50013.09969597  51698.41451493  55233.03040476
  52153.01653749  55646.87294387  50581.67071852  50345.47129102
  55304.55575292  51808.5071742 ]
Std 2042.31906196

Mean: 52621.94

Regression with GBDT (gradient boosted trees)

from sklearn.ensemble import GradientBoostingRegressor

# 'ls' is the least-squares loss (renamed 'squared_error' in newer sklearn versions)
params = {'n_estimators': 500, 'max_depth': 4, 'min_samples_split': 2,
          'learning_rate': 0.01, 'loss': 'ls'}
Gradi_reg = GradientBoostingRegressor(**params).fit(housing_prepared, housing_labels.values)
housing_predictions = Gradi_reg.predict(housing_prepared)

scores = cross_val_score(Gradi_reg, housing_prepared, housing_labels.values,
                         scoring="neg_mean_squared_error", cv=10)
grad_score = np.sqrt(-scores)
display(grad_score)
Mean 53514.5564254
Score [ 52500.6202355   50204.3683395   53594.11342981  55637.44853591
  53216.100642    56524.13781574  51136.24319508  51533.31356931
  57062.35994375  53736.85854696]
Std 2187.76354081

Mean: 53514.55

Regression with XGBoost

import xgboost as xgb

params = {'learning_rate': 0.1, 'n_estimators': 500, 'max_depth': 5, 'min_child_weight': 1,
          'seed': 0, 'subsample': 0.8, 'colsample_bytree': 0.8, 'gamma': 0,
          'reg_alpha': 0, 'reg_lambda': 1}
xgb_reg = xgb.XGBRegressor(**params).fit(housing_prepared, housing_labels.values)
housing_predictions = xgb_reg.predict(housing_prepared)

scores = cross_val_score(xgb_reg, housing_prepared, housing_labels.values,
                         scoring="neg_mean_squared_error", cv=10)
xgb_score = np.sqrt(-scores)
display(xgb_score)
Mean 45764.4701194
Score [ 45435.02275225  44024.3862194   44157.68301776  46826.87671433
  46258.32359092  49159.05496956  43942.98391701  44748.54214173
  47756.72390414  45335.10396721]
Std 1646.73463991

Mean: 45764.47

The model's accuracy has improved again, and the variance has dropped further. In the next part I will introduce the ideas behind ensemble learning and how to tune hyperparameters. I feel like I have forgotten ensemble learning again. Sad!!!!

Reposted from blog.csdn.net/PythonstartL/article/details/82991548