Machine Learning 14 -- Spot-Checking Regression Algorithms

This chapter covers seven regression algorithms: four linear algorithms (linear regression, ridge regression, lasso regression, and elastic net) and three nonlinear algorithms (K-nearest neighbors, classification and regression trees, and support vector machines).

The Boston house price dataset is used to spot-check the regression algorithms. The data is split with 10-fold cross-validation, the same split is applied to every algorithm, and each model is evaluated by mean squared error, computed with cross_val_score() from scikit-learn.


Linear Regression

Linear regression is a statistical analysis method that uses regression analysis from mathematical statistics to determine the quantitative relationship of interdependence between two or more variables.

It can be expressed as y = w'x + e, where the error e follows a normal distribution with mean 0.

When a regression involves only one independent variable and one dependent variable, and their relationship can be approximated by a straight line, it is called simple linear regression. When the regression involves two or more independent variables, with a linear relationship between the dependent variable and the independent variables, it is called multiple linear regression.

In scikit-learn, linear regression is implemented by the LinearRegression class.

#Linear Regression
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression

filename='/home/aistudio/work/housing.csv'
names=['CRIM','ZN','INDUS','CHAS','NOX','RM','AGE','DIS','RAD','TAX','PTRATIO','B','LSTAT','MEDV']
data=read_csv(filename,names=names,delim_whitespace=True)
array=data.values

x=array[:,0:13]   # the 13 input features
y=array[:,13]     # the target: median house value (MEDV)
n_splits=10
# sequential 10-fold split (random_state only takes effect with shuffle=True)
kfold=KFold(n_splits=n_splits)
model=LinearRegression()
scoring='neg_mean_squared_error'
result=cross_val_score(model,x,y,cv=kfold,scoring=scoring)
print('Linear Regression: %.3f' % result.mean())
 
Linear Regression: -34.705

(This code example is very similar to the mean squared error example introduced earlier.)
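
To relate the fitted model back to the formula y = w'x + e above, the learned weights w and the intercept can be inspected after training. The following is a small illustrative sketch (an addition, not part of the original example), reusing the x, y, and names variables defined above:

#Inspect the fitted weights and intercept (illustrative sketch)
model=LinearRegression()
model.fit(x,y)                 # fit once on the full dataset, just for inspection
print('intercept: %.3f' % model.intercept_)
for name,coef in zip(names[0:13],model.coef_):
    print('%-8s %8.3f' % (name,coef))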

Ridge Regression

Ridge regression is a biased-estimation regression method designed for analyzing collinear data. It is essentially an improved least squares estimator: by giving up the unbiasedness of ordinary least squares, at the cost of losing some information and reducing precision, it obtains regression coefficients that are more realistic and more reliable, and it fits ill-conditioned data better than ordinary least squares.
In scikit-learn, ridge regression is implemented by the Ridge class.
#Ridge Regression
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge

filename='/home/aistudio/work/housing.csv'
names=['CRIM','ZN','INDUS','CHAS','NOX','RM','AGE','DIS','RAD','TAX','PTRATIO','B','LSTAT','MEDV']
data=read_csv(filename,names=names,delim_whitespace=True)
array=data.values

x=array[:,0:13]
y=array[:,13]
n_splits=10
kfold=KFold(n_splits=n_splits)   # sequential folds, as above
model=Ridge()
scoring='neg_mean_squared_error'
result=cross_val_score(model,x,y,cv=kfold,scoring=scoring)
print('Ridge Regression: %.3f' % result.mean())
 
Ridge Regression: -34.078
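
The strength of the ridge penalty is set by the alpha parameter of Ridge (default 1.0); larger values shrink the coefficients harder. A minimal sketch of the effect (an illustrative addition), reusing x and y from the example above:

#Shrinkage of ridge coefficients as the penalty alpha grows (sketch)
import numpy as np
for alpha in [0.1,1.0,10.0,100.0]:
    model=Ridge(alpha=alpha)
    model.fit(x,y)
    # the L2 norm of the coefficient vector decreases as alpha increases
    print('alpha=%6.1f  ||w||=%.3f' % (alpha,np.linalg.norm(model.coef_)))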

Lasso Regression

Lasso regression is similar to ridge regression: it also penalizes the regression coefficients, in this case the absolute values of the coefficients. In doing so it can reduce variability and improve the accuracy of linear regression models.
It differs from ridge regression in that its penalty uses absolute values rather than squares. This causes the penalty (equivalently, a constraint on the sum of the absolute values of the estimates) to drive some parameter estimates exactly to 0; the larger the penalty, the further the estimates shrink toward 0. This amounts to selecting variables from the given n variables: if a group of predictors is highly correlated, lasso tends to pick one of them and shrink the others to 0 (illustrated in the sketch after the example below).
In scikit-learn, the implementing class is Lasso.
#Lasso Regression
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Lasso

filename='/home/aistudio/work/housing.csv'
names=['CRIM','ZN','INDUS','CHAS','NOX','RM','AGE','DIS','RAD','TAX','PTRATIO','B','LSTAT','MEDV']
data=read_csv(filename,names=names,delim_whitespace=True)
array=data.values

x=array[:,0:13]
y=array[:,13]
n_splits=10
kfold=KFold(n_splits=n_splits)   # sequential folds, as above
model=Lasso()
scoring='neg_mean_squared_error'
result=cross_val_score(model,x,y,cv=kfold,scoring=scoring)
print('Lasso Regression: %.3f' % result.mean())
 
Lasso Regression: -34.464
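
The shrink-to-zero behavior described above is easy to observe: as the penalty alpha of Lasso grows, more coefficients become exactly 0. A minimal sketch (an illustrative addition), reusing x and y from the example above:

#Count lasso coefficients driven exactly to zero (sketch)
for alpha in [0.1,1.0,10.0]:
    model=Lasso(alpha=alpha)
    model.fit(x,y)
    zeros=sum(1 for c in model.coef_ if c==0)
    print('alpha=%5.1f  zero coefficients: %d of %d' % (alpha,zeros,len(model.coef_)))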

Elastic Net Regression

Elastic net regression is a hybrid of lasso and ridge regression: during training it applies both the L1 and L2 regularization penalties. This is useful when there are several correlated features: lasso tends to pick one of them at random, whereas elastic net tends to keep both.
Compared with using lasso or ridge alone, an advantage of elastic net is that it inherits some of ridge regression's stability under rotation. In addition, in the presence of highly correlated variables it produces a grouping effect, there is no limit on the number of selected variables, and it can withstand double shrinkage.
In scikit-learn, the implementing class is ElasticNet.
#ElasticNet Regression
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import ElasticNet

filename='/home/aistudio/work/housing.csv'
names=['CRIM','ZN','INDUS','CHAS','NOX','RM','AGE','DIS','RAD','TAX','PTRATIO','B','LSTAT','MEDV']
data=read_csv(filename,names=names,delim_whitespace=True)
array=data.values

x=array[:,0:13]
y=array[:,13]
n_splits=10
kfold=KFold(n_splits=n_splits)   # sequential folds, as above
model=ElasticNet()
scoring='neg_mean_squared_error'
result=cross_val_score(model,x,y,cv=kfold,scoring=scoring)
print('ElasticNet Regression: %.3f' % result.mean())
 
ElasticNet Regression: -31.165
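
The balance between the L1 and L2 penalties is controlled by the l1_ratio parameter of ElasticNet (1.0 is pure lasso, 0.0 is pure ridge; the default is 0.5). A sketch comparing a few mixes under the same test harness (an illustrative addition), reusing x, y, and kfold from the example above:

#Compare cross-validated MSE for different L1/L2 mixes (sketch)
for l1_ratio in [0.2,0.5,0.8]:
    model=ElasticNet(l1_ratio=l1_ratio)
    result=cross_val_score(model,x,y,cv=kfold,scoring='neg_mean_squared_error')
    print('l1_ratio=%.1f  MSE: %.3f' % (l1_ratio,result.mean()))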

Nonlinear Algorithms

All three of these algorithms also have classification counterparts.

K-Nearest Neighbors

The K-nearest neighbors algorithm predicts the result based on distance: it locates the K training samples nearest to a new data point and averages their target values.
In scikit-learn, the K-nearest neighbors algorithm is implemented by the KNeighborsRegressor class. The default distance metric is the Minkowski distance; Manhattan distance can also be specified as the distance measure (see the sketch after the example below).
#KNN Regression
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

filename='/home/aistudio/work/housing.csv'
names=['CRIM','ZN','INDUS','CHAS','NOX','RM','AGE','DIS','RAD','TAX','PTRATIO','B','LSTAT','MEDV']
data=read_csv(filename,names=names,delim_whitespace=True)
array=data.values

x=array[:,0:13]
y=array[:,13]
n_splits=10
kfold=KFold(n_splits=n_splits)   # sequential folds, as above
model=KNeighborsRegressor()
scoring='neg_mean_squared_error'
result=cross_val_score(model,x,y,cv=kfold,scoring=scoring)
print('KNeighbors Regression: %.3f' % result.mean())
 
KNeighbors Regression: -107.287
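
To use Manhattan distance instead of the default, pass metric='manhattan' (equivalently, p=1 with the Minkowski metric) to KNeighborsRegressor. A minimal sketch under the same harness (an illustrative addition), reusing x, y, and kfold from the example above:

#KNN with Manhattan distance instead of the default metric (sketch)
model=KNeighborsRegressor(metric='manhattan')
result=cross_val_score(model,x,y,cv=kfold,scoring='neg_mean_squared_error')
print('KNeighbors (manhattan): %.3f' % result.mean())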

Classification and Regression Trees

Similar to the classification code, classification and regression trees are implemented in scikit-learn by the DecisionTreeRegressor class.
#CART Regression
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

filename='/home/aistudio/work/housing.csv'
names=['CRIM','ZN','INDUS','CHAS','NOX','RM','AGE','DIS','RAD','TAX','PTRATIO','B','LSTAT','MEDV']
data=read_csv(filename,names=names,delim_whitespace=True)
array=data.values

x=array[:,0:13]
y=array[:,13]
n_splits=10
kfold=KFold(n_splits=n_splits)   # sequential folds, as above
model=DecisionTreeRegressor()
scoring='neg_mean_squared_error'
result=cross_val_score(model,x,y,cv=kfold,scoring=scoring)
print('CART Regression: %.3f' % result.mean())
CART Regression: -38.303
(The result differs from run to run.)
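
The run-to-run differences come from the tree's random tie-breaking among equally good splits; fixing random_state makes the score reproducible. A minimal sketch (an illustrative addition), reusing x, y, and kfold from the example above:

#Reproducible CART by fixing the random seed (sketch)
model=DecisionTreeRegressor(random_state=7)
result=cross_val_score(model,x,y,cv=kfold,scoring='neg_mean_squared_error')
print('CART (fixed seed): %.3f' % result.mean())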

Support Vector Machines

The support vector machine algorithm was also introduced in the previous chapter; it can equally be used for regression problems.
In scikit-learn, the class that handles regression is SVR.
#SVM Regression
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

filename='/home/aistudio/work/housing.csv'
names=['CRIM','ZN','INDUS','CHAS','NOX','RM','AGE','DIS','RAD','TAX','PTRATIO','B','LSTAT','MEDV']
data=read_csv(filename,names=names,delim_whitespace=True)
array=data.values

x=array[:,0:13]
y=array[:,13]
n_splits=10
kfold=KFold(n_splits=n_splits)   # sequential folds, as above
model=SVR()
scoring='neg_mean_squared_error'
result=cross_val_score(model,x,y,cv=kfold,scoring=scoring)
print('SVM: %.3f' % result.mean())
SVM: -91.048
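
SVR with its default RBF kernel is sensitive to the scale of the input features, which partly explains the weak score here. A hedged sketch of a common variant (an addition, not from the original): standardize the features in a Pipeline before the SVR step, reusing x, y, and kfold from the example above.

#Standardize features before SVR inside a Pipeline (sketch)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
model=Pipeline([('scaler',StandardScaler()),('svr',SVR())])
result=cross_val_score(model,x,y,cv=kfold,scoring='neg_mean_squared_error')
print('SVM (scaled): %.3f' % result.mean())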

In scikit-learn, evaluation metrics are usually computed with the cross_val_score function, and the scoring parameter selects which metric to use.
The available scoring values are listed in the table below:
Scoring                        Function                                Comment
Classification
accuracy                       metrics.accuracy_score
average_precision              metrics.average_precision_score
f1                             metrics.f1_score                        for binary targets
f1_micro                       metrics.f1_score                        micro-averaged
f1_macro                       metrics.f1_score                        macro-averaged
f1_weighted                    metrics.f1_score                        weighted average
f1_samples                     metrics.f1_score                        by multilabel sample
neg_log_loss                   metrics.log_loss                        requires predict_proba support
precision etc.                 metrics.precision_score                 suffixes apply as with f1
recall etc.                    metrics.recall_score                    suffixes apply as with f1
roc_auc                        metrics.roc_auc_score
Clustering
adjusted_mutual_info_score     metrics.adjusted_mutual_info_score
adjusted_rand_score            metrics.adjusted_rand_score
completeness_score             metrics.completeness_score
fowlkes_mallows_score          metrics.fowlkes_mallows_score
homogeneity_score              metrics.homogeneity_score
mutual_info_score              metrics.mutual_info_score
normalized_mutual_info_score   metrics.normalized_mutual_info_score
v_measure_score                metrics.v_measure_score
Regression
explained_variance             metrics.explained_variance_score
neg_mean_absolute_error        metrics.mean_absolute_error
neg_mean_squared_error         metrics.mean_squared_error
neg_mean_squared_log_error     metrics.mean_squared_log_error
neg_median_absolute_error      metrics.median_absolute_error
r2                             metrics.r2_score
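
For example, the same harness can report several regression metrics simply by changing the scoring string; error-based metrics carry the neg_ prefix so that larger is always better. A short sketch (an illustrative addition), reusing x, y, and kfold from the examples above:

#Switch evaluation metrics via the scoring parameter (sketch)
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
for scoring in ['neg_mean_squared_error','neg_mean_absolute_error','r2']:
    result=cross_val_score(LinearRegression(),x,y,cv=kfold,scoring=scoring)
    print('%-25s %.3f' % (scoring,result.mean()))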
     


Reposted from www.cnblogs.com/yuzaihuan/p/12886285.html