Used-Car Data Mining - Modeling and Parameter Tuning

Datawhale Introduction to Data Mining for Beginners - Task 4: Modeling and Parameter Tuning

4. Modeling and Parameter Tuning

Tip: This is the Task 4 (modeling and parameter tuning) part of the beginner-friendly introduction to data mining. It walks you through common models, model evaluation, and tuning strategies. Further discussion and feedback are welcome.

Competition: Introduction to Data Mining for Beginners - Used Car Transaction Price Prediction

地址:https://tianchi.aliyun.com/competition/entrance/231784/introduction?spm=5176.12281957.1004.1.38b02448ausjSX

5.1 Learning Objectives

  • Understand commonly used machine learning models and master the modeling and parameter tuning workflow
  • Complete the corresponding study check-in task

5.2 Content Overview

  1. Linear regression model:
    • Requirements linear regression places on the features;
    • Handling long-tailed distributions;
    • Understanding the linear regression model;
  2. Model performance validation:
    • Evaluation functions vs. objective functions;
    • Cross-validation;
    • Leave-one-out validation;
    • Validation for time-series problems;
    • Plotting learning curves;
    • Plotting validation curves;
  3. Embedded feature selection:
    • Lasso regression;
    • Ridge regression;
    • Decision trees;
  4. Model comparison:
    • Common linear models;
    • Common non-linear models;
  5. Model tuning:
    • Greedy tuning;
    • Grid search tuning;
    • Bayesian tuning;

5.3 Related Theory: Introductions and Recommendations

Since a full treatment of the underlying algorithm theory would be lengthy, this article recommends some blog posts and textbooks for beginners to study.

5.3.1 Linear regression model

https://zhuanlan.zhihu.com/p/49480391

5.3.2 Decision tree model

https://zhuanlan.zhihu.com/p/65304798

5.3.3 GBDT model

https://zhuanlan.zhihu.com/p/45145899

5.3.4 XGBoost model

https://zhuanlan.zhihu.com/p/86816771

5.3.5 LightGBM model

https://zhuanlan.zhihu.com/p/89360721

5.3.6 Recommended textbooks:

5.4 Code Examples

5.4.1 Reading the data

import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')

The reduce_mem_usage function reduces the DataFrame's memory footprint by downcasting each column to the smallest data type that can hold its values.

def reduce_mem_usage(df):
    """ iterate through all the columns of a dataframe and modify the data type
        to reduce memory usage.        
    """
    start_mem = df.memory_usage().sum() / 1024**2  # bytes -> MB
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    
    for col in df.columns:
        col_type = df[col].dtype
        
        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)  
            else:
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            df[col] = df[col].astype('category')

    end_mem = df.memory_usage().sum() / 1024**2  # bytes -> MB
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df
sample_feature = reduce_mem_usage(pd.read_csv('data_for_tree.csv'))
Memory usage of dataframe is 54.67 MB
Memory usage after optimization is: 14.07 MB
Decreased by 74.3%
sample_feature.head()
name model brand bodyType fuelType gearbox power kilometer notRepairedDamage price ... used_time city brand_amount brand_price_max brand_price_median brand_price_min brand_price_sum brand_price_std brand_price_average power_bin
0 736 30 6 1.0 0.0 0.0 60 12.5 0.0 1850.0 ... 4384.0 1.0 10192.0 35990.0 1800.0 13.0 36457520.0 4564.0 3576.0 5.0
1 2262 40 1 2.0 0.0 0.0 0 15.0 MISSING 3600.0 ... 4756.0 4.0 13656.0 84000.0 6400.0 15.0 124044600.0 8992.0 9080.0 NaN
2 14874 115 15 1.0 0.0 0.0 163 12.5 0.0 6222.0 ... 4384.0 2.0 1458.0 45000.0 8496.0 100.0 14373814.0 5424.0 9848.0 16.0
3 71865 109 10 0.0 0.0 1.0 193 15.0 0.0 2400.0 ... 7124.0 NaN 13992.0 92900.0 5200.0 15.0 113034208.0 8248.0 8076.0 19.0
4 111080 110 5 1.0 0.0 0.0 68 5.0 0.0 5200.0 ... 1531.0 6.0 4664.0 31500.0 2300.0 20.0 15414322.0 3344.0 3306.0 6.0

5 rows × 36 columns

continuous_feature_names = [x for x in sample_feature.columns if x not in ['price','brand','model','name', 'bodyType', 'fuelType', 'notRepairedDamage']]
continuous_feature_names
['gearbox',
 'power',
 'kilometer',
 'v_0',
 'v_1',
 'v_2',
 'v_3',
 'v_4',
 'v_5',
 'v_6',
 'v_7',
 'v_8',
 'v_9',
 'v_10',
 'v_11',
 'v_12',
 'v_13',
 'v_14',
 'train',
 'used_time',
 'city',
 'brand_amount',
 'brand_price_max',
 'brand_price_median',
 'brand_price_min',
 'brand_price_sum',
 'brand_price_std',
 'brand_price_average',
 'power_bin']

5.4.2 Linear Regression & Five-Fold Cross-Validation & Simulating the Real Business Scenario

sample_feature = sample_feature.dropna().replace('-', 0).reset_index(drop=True)
sample_feature = sample_feature.replace('MISSING', 0)
print(sample_feature.head())
sample_feature['notRepairedDamage'] = sample_feature['notRepairedDamage'].astype(np.float32)
train = sample_feature[continuous_feature_names + ['price']]

train_X = train[continuous_feature_names]
train_y = train['price']
     name model  brand bodyType fuelType gearbox  power  kilometer  \
0     736    30      6      1.0      0.0     0.0     60       12.5   
1   14874   115     15      1.0      0.0     0.0    163       12.5   
2  111080   110      5      1.0      0.0     0.0     68        5.0   
3  137642    24     10      0.0      1.0     0.0    109       10.0   
4    2402    13      4      0.0      0.0     1.0    150       15.0   

  notRepairedDamage   price  ...  used_time  city  brand_amount  \
0               0.0  1850.0  ...     4384.0   1.0       10192.0   
1               0.0  6222.0  ...     4384.0   2.0        1458.0   
2               0.0  5200.0  ...     1531.0   6.0        4664.0   
3               0.0  8000.0  ...     2482.0   3.0       13992.0   
4               0.0  3500.0  ...     6184.0   3.0       16576.0   

   brand_price_max  brand_price_median  brand_price_min  brand_price_sum  \
0          35990.0              1800.0             13.0       36457520.0   
1          45000.0              8496.0            100.0       14373814.0   
2          31500.0              2300.0             20.0       15414322.0   
3          92900.0              5200.0             15.0      113034208.0   
4          99999.0              6000.0             12.0      138279072.0   

   brand_price_std  brand_price_average  power_bin  
0           4564.0               3576.0        5.0  
1           5424.0               9848.0       16.0  
2           3344.0               3306.0        6.0  
3           8248.0               8076.0       10.0  
4           8088.0               8344.0       14.0  

[5 rows x 36 columns]

5.4.2 - 1 Simple Modeling

from sklearn.linear_model import LinearRegression
model = LinearRegression(normalize=True)  # note: normalize was removed in scikit-learn 1.2; on newer versions standardize the features yourself (e.g. with StandardScaler)
model = model.fit(train_X, train_y)

Inspect the intercept and the weights (coef) of the trained linear regression model:

print('intercept:'+ str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)
intercept:-763121.415400561





[('v_6', 3418891.8661930403),
 ('v_5', 2297424.5932109547),
 ('v_8', 1219155.1076147154),
 ('v_9', 689318.2798646946),
 ('v_7', 441213.76855152246),
 ('v_11', 33695.21532491009),
 ('v_12', 12751.353677724243),
 ('v_10', 11940.051759841095),
 ('gearbox', 915.6123144165624),
 ('v_14', 232.73304565903075),
 ('city', 38.480371412194856),
 ('power', 35.6062576498682),
 ('brand_price_median', 0.45503872387225264),
 ('brand_price_std', 0.45190937612308135),
 ('brand_amount', 0.18847929674706093),
 ('brand_price_max', 0.005052224552260477),
 ('train', 3.818422555923462e-07),
 ('brand_price_sum', -3.0058302585516643e-05),
 ('used_time', -0.02462480273046145),
 ('brand_price_average', -0.34643690063272586),
 ('brand_price_min', -2.5319297835051704),
 ('power_bin', -109.76386262213285),
 ('v_13', -150.7399294462697),
 ('kilometer', -351.36103146189294),
 ('v_3', -1344.0675611153686),
 ('v_0', -2798.561647500838),
 ('v_4', -3853.612550388567),
 ('v_2', -39428.5597862382),
 ('v_1', -44599.82891284008)]
from matplotlib import pyplot as plt
subsample_index = np.random.randint(low=0, high=len(train_y), size=50)

Plot the values of feature v_9 against the label. The scatter plot shows that the model's predictions (blue points) deviate substantially from the true labels (black points), and some predicted prices are even negative, which indicates that the model has problems.

plt.scatter(train_X['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_X['v_9'][subsample_index], model.predict(train_X.loc[subsample_index]), color='blue')
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price','Predicted Price'],loc='upper right')
print('The predicted price is obvious different from true price')
plt.show()
The predicted price is obvious different from true price

[Figure: scatter of v_9 vs. price, true (black) vs. predicted (blue) values; output_24_1.png]

Plotting the data shows that the label (price) follows a long-tailed distribution, which is unfavorable for modeling: many models assume that the error term is normally distributed, and long-tailed data violates this assumption. Reference blog post: https://blog.csdn.net/Noob_daniel/article/details/76087829

import seaborn as sns
print('It is clear to see the price shows a typical exponential distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y)
plt.subplot(1,2,2)
sns.distplot(train_y[train_y < np.quantile(train_y, 0.9)])
It is clear to see the price shows a typical exponential distribution





<matplotlib.axes._subplots.AxesSubplot at 0x1833c28a4c8>

[Figure: distribution of price (left) and of price below its 90th percentile (right); output_26_2.png]

Here we apply a log(x+1) transformation to the label to bring it closer to a normal distribution.

train_y_ln = np.log(train_y + 1)
import seaborn as sns
print('The transformed price seems like normal distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y_ln)
plt.subplot(1,2,2)
sns.distplot(train_y_ln[train_y_ln < np.quantile(train_y_ln, 0.9)])
The transformed price seems like normal distribution





<matplotlib.axes._subplots.AxesSubplot at 0x1833c60cd08>

[Figure: distribution of the log-transformed price (left) and of its values below the 90th percentile (right); output_29_2.png]

model = model.fit(train_X, train_y_ln)

print('intercept:'+ str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)
intercept:22.490527976445637





[('v_9', 6.73837719656898),
 ('v_1', 1.8764010743138013),
 ('v_12', 1.5490066205584243),
 ('v_5', 1.3684828986478352),
 ('v_13', 0.9381007016475442),
 ('v_11', 0.8601076136541934),
 ('v_3', 0.6908662876168846),
 ('v_7', 0.07176605184338732),
 ('power_bin', 0.009208120503260045),
 ('gearbox', 0.005832463904491905),
 ('power', 0.0004532988300577831),
 ('brand_price_min', 2.958178448217908e-05),
 ('used_time', 7.25708186886524e-06),
 ('brand_amount', 3.2317294039114087e-06),
 ('brand_price_median', 1.2316687308102237e-06),
 ('brand_price_max', 7.440945604426392e-07),
 ('brand_price_average', 6.077520449532623e-07),
 ('train', -1.2789769243681803e-11),
 ('brand_price_sum', -2.437300649289914e-10),
 ('brand_price_std', -4.2133697033978156e-07),
 ('v_14', -0.0003021985128370997),
 ('city', -0.003247381687803047),
 ('kilometer', -0.012962886393843829),
 ('v_0', -0.031397921158372956),
 ('v_2', -0.698190677552077),
 ('v_4', -0.8159958185074844),
 ('v_10', -1.5348138603344603),
 ('v_8', -42.38488913963534),
 ('v_6', -253.24942729281895)]

Visualizing again, the predictions are now fairly close to the true values, and no anomalies appear.

plt.scatter(train_X['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_X['v_9'][subsample_index], np.expm1(model.predict(train_X.loc[subsample_index])), color='blue')  # expm1 inverts the log(x + 1) transform
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price','Predicted Price'],loc='upper right')
print('The predicted price seems normal after np.log transforming')
plt.show()
The predicted price seems normal after np.log transforming

[Figure: scatter of v_9 vs. price after the log transform, true vs. predicted values; output_32_1.png]

5.4.2 - 2 Five-Fold Cross-Validation

K-fold cross-validation

Because the validation set does not take part in training, reserving a large amount of validation data is too wasteful when training data is scarce. One remedy is K-fold cross-validation: the original training set is split into K non-overlapping subsets, and we run K rounds of training and validation. In each round, one subset is used to validate the model and the other K-1 subsets are used to train it, so each subset serves as the validation set exactly once. Finally, the K training errors and the K validation errors are each averaged.

When training model parameters, people commonly split the full dataset into three parts (as with the MNIST handwriting dataset): a training set (train_set), a validation set (valid_set), and a test set (test_set). This split is made deliberately to safeguard training quality. The test set is easy to understand: it is data that never participates in training and is used only to measure final performance. The training and validation sets relate to the idea below.

In practice, a trained model usually fits its training set quite well, but its fit on data outside the training set is often much less satisfactory. We therefore do not train on all of the data; instead, we hold out a portion (which does not participate in training) and use it to evaluate the parameters learned from the training set, giving a relatively objective measure of how well those parameters generalize to unseen data. This idea is called cross-validation.
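
To make the mechanics described above concrete, here is a minimal sketch (added for illustration; not part of the original notebook) of a manual five-fold loop built on sklearn's KFold, reusing the train_X and train_y_ln defined earlier. The cross_val_score call used below wraps exactly this pattern, and LeaveOneOut from the same module implements the leave-one-out variant mentioned in section 5.2.

from sklearn.model_selection import KFold  # LeaveOneOut lives in the same module
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_mae = []
for fold, (train_idx, val_idx) in enumerate(kf.split(train_X)):
    # train on the other four folds, validate on the held-out fold
    reg = LinearRegression().fit(train_X.iloc[train_idx], train_y_ln.iloc[train_idx])
    fold_mae.append(mean_absolute_error(train_y_ln.iloc[val_idx],
                                        reg.predict(train_X.iloc[val_idx])))
    print('fold {}: MAE = {:.4f}'.format(fold, fold_mae[-1]))
print('average MAE:', np.mean(fold_mae))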

from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_absolute_error,  make_scorer ## MAE
def log_transfer(func):
    # wrap a metric so that it compares log(y) with log(yhat); this lets the
    # raw-price model be scored on the same (log) scale as the log-target model
    def wrapper(y, yhat):
        result = func(np.log(y), np.nan_to_num(np.log(yhat)))
        return result
    return wrapper
scores = cross_val_score(model, X=train_X, y=train_y, verbose=1, cv = 5, scoring=make_scorer(log_transfer(mean_absolute_error)))
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done   5 out of   5 | elapsed:    1.6s finished

Five-fold cross-validation of the linear regression model on the data with the untransformed label (error 1.36):

print('AVG:', np.mean(scores))
AVG: 1.369013918691876

Five-fold cross-validation of the linear regression model on the data with the log-transformed label (error 0.19):

scores = cross_val_score(model, X=train_X, y=train_y_ln, verbose=1, cv = 5, scoring=make_scorer(mean_absolute_error))
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done   5 out of   5 | elapsed:    1.5s finished
print('AVG:', np.mean(scores))
AVG: 0.19576915594425995
scores = pd.DataFrame(scores.reshape(1,-1))
scores.columns = ['cv' + str(x) for x in range(1, 6)]
scores.index = ['MAE']
scores
cv1 cv2 cv3 cv4 cv5
MAE 0.194274 0.195956 0.195945 0.194693 0.197977

5.4.2 - 3 Simulating the Real Business Scenario

In reality, however, we cannot see into the future. On time-related datasets, five-fold cross-validation can therefore paint an unrealistic picture: predicting 2017 used-car prices from 2018 prices is clearly unreasonable. Instead, we can also split the dataset in chronological order. In this example we take the earliest 4/5 of the samples as the training set and the latest 1/5 as the validation set; the final result differs little from that of five-fold cross-validation.

import datetime
sample_feature = sample_feature.reset_index(drop=True)
sample_feature
name model brand bodyType fuelType gearbox power kilometer notRepairedDamage price ... used_time city brand_amount brand_price_max brand_price_median brand_price_min brand_price_sum brand_price_std brand_price_average power_bin
0 736 30 6 1.0 0.0 0.0 60 12.5 0.0 1850.0 ... 4384.0 1.0 10192.0 35990.0 1800.0 13.0 36457520.0 4564.0 3576.0 5.0
1 14874 115 15 1.0 0.0 0.0 163 12.5 0.0 6222.0 ... 4384.0 2.0 1458.0 45000.0 8496.0 100.0 14373814.0 5424.0 9848.0 16.0
2 111080 110 5 1.0 0.0 0.0 68 5.0 0.0 5200.0 ... 1531.0 6.0 4664.0 31500.0 2300.0 20.0 15414322.0 3344.0 3306.0 6.0
3 137642 24 10 0.0 1.0 0.0 109 10.0 0.0 8000.0 ... 2482.0 3.0 13992.0 92900.0 5200.0 15.0 113034208.0 8248.0 8076.0 10.0
4 2402 13 4 0.0 0.0 1.0 150 15.0 0.0 3500.0 ... 6184.0 3.0 16576.0 99999.0 6000.0 12.0 138279072.0 8088.0 8344.0 14.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
96262 43073 42 1 1.0 0.0 0.0 122 3.0 0.0 14780.0 ... 1538.0 5.0 13656.0 84000.0 6400.0 15.0 124044600.0 8992.0 9080.0 12.0
96263 163978 121 10 4.0 0.0 1.0 163 15.0 0.0 5900.0 ... 5772.0 4.0 13992.0 92900.0 5200.0 15.0 113034208.0 8248.0 8076.0 16.0
96264 184535 116 11 0.0 0.0 0.0 125 10.0 0.0 9500.0 ... 2322.0 2.0 2944.0 34500.0 2900.0 30.0 13398006.0 4724.0 4548.0 12.0
96265 147587 60 11 1.0 1.0 0.0 90 6.0 0.0 7500.0 ... 2003.0 3.0 2944.0 34500.0 2900.0 30.0 13398006.0 4724.0 4548.0 8.0
96266 45907 34 10 3.0 1.0 0.0 156 15.0 0.0 4999.0 ... 3672.0 1.0 13992.0 92900.0 5200.0 15.0 113034208.0 8248.0 8076.0 15.0

96267 rows × 36 columns

split_point = len(sample_feature) // 5 * 4
train = sample_feature.loc[:split_point].dropna()
val = sample_feature.loc[split_point:].dropna()

train_X = train[continuous_feature_names]
train_y_ln = np.log(train['price'] + 1)
val_X = val[continuous_feature_names]
val_y_ln = np.log(val['price'] + 1)
model = model.fit(train_X, train_y_ln)
mean_absolute_error(val_y_ln, model.predict(val_X))
0.19796660363310997
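
If you prefer to let sklearn construct the chronological splits, TimeSeriesSplit produces expanding-window folds in which each validation fold always comes after its training data. A sketch, added here for illustration, assuming the rows are already in time order (the same assumption the manual split above makes):

from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)
scores = cross_val_score(model, X=train_X, y=train_y_ln, cv=tscv,
                         scoring=make_scorer(mean_absolute_error))
print('AVG:', np.mean(scores))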

5.4.2 - 4 Plotting Learning Curves and Validation Curves

from sklearn.model_selection import learning_curve, validation_curve
? learning_curve
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,n_jobs=1, train_size=np.linspace(.1, 1.0, 5 )):  
    plt.figure()  
    plt.title(title)  
    if ylim is not None:  
        plt.ylim(*ylim)  
    plt.xlabel('Training example')  
    plt.ylabel('score')  
    train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_size, scoring = make_scorer(mean_absolute_error))  
    train_scores_mean = np.mean(train_scores, axis=1)  
    train_scores_std = np.std(train_scores, axis=1)  
    test_scores_mean = np.mean(test_scores, axis=1)  
    test_scores_std = np.std(test_scores, axis=1)  
    plt.grid()  # draw the grid
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,  
                     train_scores_mean + train_scores_std, alpha=0.1,  
                     color="r")  # shade train_scores_mean +/- train_scores_std
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,  
                     test_scores_mean + test_scores_std, alpha=0.1,  
                     color="g")  
    plt.plot(train_sizes, train_scores_mean, 'o-', color='r',  
             label="Training score")  
    plt.plot(train_sizes, test_scores_mean,'o-',color="g",  
             label="Cross-validation score")  
    plt.legend(loc="best")  
    return plt  
plot_learning_curve(LinearRegression(), 'Liner_model', train_X[:1000], train_y_ln[:1000], ylim=(0.0, 0.5), cv=5, n_jobs=1)  
<module 'matplotlib.pyplot' from 'C:\\Users\\94890\\Anaconda3\\lib\\site-packages\\matplotlib\\pyplot.py'>

[Figure: learning curve of the linear model, training vs. cross-validation MAE; output_57_1.png]
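
The section heading also promises validation curves: a validation curve keeps the data fixed and sweeps a single hyperparameter to show where under- and over-fitting begin. The sketch below is added for illustration (it is not in the original notebook) and sweeps the alpha of a ridge regression with sklearn's validation_curve, using the same MAE scorer:

from sklearn.model_selection import validation_curve
from sklearn.linear_model import Ridge

param_range = [0.01, 0.1, 1.0, 10.0, 100.0]
train_scores, test_scores = validation_curve(
    Ridge(), train_X[:1000], train_y_ln[:1000],
    param_name='alpha', param_range=param_range,
    cv=5, scoring=make_scorer(mean_absolute_error))

plt.plot(param_range, np.mean(train_scores, axis=1), 'o-', color='r', label='Training score')
plt.plot(param_range, np.mean(test_scores, axis=1), 'o-', color='g', label='Cross-validation score')
plt.xscale('log')
plt.xlabel('alpha')
plt.ylabel('MAE')
plt.legend(loc='best')
plt.show()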

5.4.3 Comparing Multiple Models

train = sample_feature[continuous_feature_names + ['price']].dropna()

train_X = train[continuous_feature_names]
train_y = train['price']
train_y_ln = np.log(train_y + 1)

5.4.3 - 1 Linear Models & Embedded Feature Selection

This section assumes the reader is already familiar with concepts such as overfitting, model complexity, and regularization; otherwise, please consult related materials or the following links:

In filter and wrapper feature-selection methods, the feature-selection step is clearly separated from training the learner. Embedded feature selection, in contrast, selects features automatically during training. The most common embedded approaches are L1 and L2 regularization: adding L1 regularization to a linear regression model yields Lasso regression, and adding L2 regularization yields ridge regression.
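
Written out explicitly (following scikit-learn's documented objectives, where alpha controls the regularization strength and n is the number of samples), the two regularized objectives are:

ridge (L2):  minimize over w   ||y - Xw||_2^2 + alpha * ||w||_2^2
Lasso (L1):  minimize over w   (1 / (2n)) * ||y - Xw||_2^2 + alpha * ||w||_1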

from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
models = [LinearRegression(),
          Ridge(),
          Lasso()]
result = dict()
for model in models:
    model_name = str(model).split('(')[0]
    scores = cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error))
    result[model_name] = scores
    print(model_name + ' is finished')
LinearRegression is finished
Ridge is finished
Lasso is finished

Comparing the performance of the three methods:

result = pd.DataFrame(result)
result.index = ['cv' + str(x) for x in range(1, 6)]
result
LinearRegression Ridge Lasso
cv1 0.194274 0.199028 0.392064
cv2 0.195956 0.200631 0.389369
cv3 0.195945 0.200816 0.391919
cv4 0.194693 0.199294 0.386594
cv5 0.197977 0.202830 0.392358
model = LinearRegression().fit(train_X, train_y_ln)
print('intercept:'+ str(model.intercept_))
sns.barplot(abs(model.coef_), continuous_feature_names)
intercept:22.490527976546055





<matplotlib.axes._subplots.AxesSubplot at 0x1834b047d88>

[Figure: absolute coefficient magnitudes of the unregularized linear regression; output_68_2.png]

During fitting, L2 regularization tends to make the weights as small as possible, producing a model in which all parameters are fairly small. A model with small parameters is generally considered simpler, adapts better to different datasets, and to some extent avoids overfitting. Intuitively, if a linear regression equation has very large weights, even a tiny shift in the data changes the result dramatically; if the weights are small enough, a larger shift in the data barely affects the result, which in more technical terms means strong robustness to perturbation.

model = Ridge().fit(train_X, train_y_ln)
print('intercept:'+ str(model.intercept_))
sns.barplot(abs(model.coef_), continuous_feature_names)
intercept:6.953548340458286





<matplotlib.axes._subplots.AxesSubplot at 0x18334871908>

[Figure: absolute coefficient magnitudes of the ridge regression model; output_70_2.png]

L1 regularization helps produce a sparse weight vector, which can in turn be used for feature selection. As the figure below shows, the power and used_time features turn out to be very important.

model = Lasso().fit(train_X, train_y_ln)
print('intercept:'+ str(model.intercept_))
sns.barplot(abs(model.coef_), continuous_feature_names)
intercept:8.67070637212979





<matplotlib.axes._subplots.AxesSubplot at 0x183445fa988>

[Figure: absolute coefficient magnitudes of the Lasso model; output_72_2.png]

Beyond this, when a decision tree selects split nodes using information entropy or the Gini index, features chosen earlier for splitting are also more important, which is another form of feature selection. The feature importance reported by XGBoost and LightGBM (feature_importances_ in their sklearn interfaces) is computed on exactly this basis.
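
A small sketch (added for illustration, reusing the train_X and train_y_ln defined above) that reads these split-based importances from a LightGBM model:

from lightgbm.sklearn import LGBMRegressor

gbm = LGBMRegressor(n_estimators=100).fit(train_X, train_y_ln)
importance = pd.Series(gbm.feature_importances_, index=train_X.columns)
print(importance.sort_values(ascending=False).head(10))  # features used most often for splits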

5.4.3 - 2 Non-Linear Models

Besides linear models, there are many commonly used non-linear models; space does not allow explaining each one's principles here, so we simply compare a selection of common models against the linear model.

from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from xgboost.sklearn import XGBRegressor
from lightgbm.sklearn import LGBMRegressor
models = [LinearRegression(),
          DecisionTreeRegressor(),
          RandomForestRegressor(),
          GradientBoostingRegressor(),
          MLPRegressor(solver='lbfgs', max_iter=100), 
          XGBRegressor(n_estimators = 100, objective='reg:squarederror'), 
          LGBMRegressor(n_estimators = 100)]
result = dict()
del train_X['gearbox']
for model in models:
    model_name = str(model).split('(')[0]
    scores = cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error))
    result[model_name] = scores
    print(model_name + ' is finished')
LinearRegression is finished
DecisionTreeRegressor is finished
RandomForestRegressor is finished
GradientBoostingRegressor is finished
MLPRegressor is finished
XGBRegressor is finished
LGBMRegressor is finished
result = pd.DataFrame(result)
result.index = ['cv' + str(x) for x in range(1, 6)]
result
LinearRegression DecisionTreeRegressor RandomForestRegressor GradientBoostingRegressor MLPRegressor XGBRegressor LGBMRegressor
cv1 0.194302 0.197484 0.147026 0.177392 116.598407 0.143866 0.147577
cv2 0.196005 0.205765 0.148228 0.179418 67.139661 0.147769 0.150367
cv3 0.195943 0.200433 0.149695 0.179679 26.075124 0.146292 0.148571
cv4 0.194723 0.197947 0.147885 0.176486 248.037378 0.144756 0.148429
cv5 0.197991 0.202071 0.151757 0.181362 244.320036 0.148483 0.152119

The tree-ensemble models (random forest, gradient boosting, XGBoost, LightGBM) outperform the linear model and the single decision tree in every fold; in this run XGBoost achieves the lowest MAE, with the random forest and LightGBM close behind. The MLP performs very poorly, most likely because the features were not standardized.

5.4.4 Model Tuning

Here we introduce the following three commonly used tuning methods:

## Candidate parameter sets for LGBM:

objective = ['regression', 'regression_l1', 'mape', 'huber', 'fair']

num_leaves = [3,5,10,15,20,40, 55]
max_depth = [3,5,10,15,20,40, 55]
bagging_fraction = []
feature_fraction = []
drop_rate = []

5.4.4 - 1 Greedy Tuning

best_obj = dict()
for obj in objective:
    model = LGBMRegressor(objective=obj)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
    best_obj[obj] = score
    
best_leaves = dict()
for leaves in num_leaves:
    model = LGBMRegressor(objective=min(best_obj.items(), key=lambda x:x[1])[0], num_leaves=leaves)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
    best_leaves[leaves] = score
    
best_depth = dict()
for depth in max_depth:
    model = LGBMRegressor(objective=min(best_obj.items(), key=lambda x:x[1])[0],
                          num_leaves=min(best_leaves.items(), key=lambda x:x[1])[0],
                          max_depth=depth)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
    best_depth[depth] = score
sns.lineplot(x=['0_initial','1_turning_obj','2_turning_leaves','3_turning_depth'], y=[0.143 ,min(best_obj.values()), min(best_leaves.values()), min(best_depth.values())])
<matplotlib.axes._subplots.AxesSubplot at 0x1834d252888>

[Figure: MAE after tuning objective, num_leaves, and max_depth in turn; output_86_1.png]

5.4.4 - 2 Grid Search Tuning

from sklearn.model_selection import GridSearchCV
parameters = {'objective': objective , 'num_leaves': num_leaves, 'max_depth': max_depth}
model = LGBMRegressor()
clf = GridSearchCV(model, parameters, cv=5)
clf = clf.fit(train_X, train_y)  # note: this fits on the raw price; for consistency with the rest of the section, train_y_ln could be used instead
clf.best_params_
{'max_depth': 40, 'num_leaves': 55, 'objective': 'regression'}
model = LGBMRegressor(objective='regression',
                          num_leaves=55,
                          max_depth=40)
np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
0.14379304572952936

5.4.4 - 3 Bayesian Tuning

from bayes_opt import BayesianOptimization
def rf_cv(num_leaves, max_depth, subsample, min_child_samples):
    val = cross_val_score(
        LGBMRegressor(objective = 'regression_l1',
            num_leaves=int(num_leaves),
            max_depth=int(max_depth),
            subsample = subsample,
            min_child_samples = int(min_child_samples)
        ),
        X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)
    ).mean()
    return 1 - val  # BayesianOptimization maximizes its target, so return 1 - MAE
rf_bo = BayesianOptimization(
    rf_cv,
    {
    'num_leaves': (2, 100),
    'max_depth': (2, 100),
    'subsample': (0.1, 1),
    'min_child_samples' : (2, 100)
    }
)
rf_bo.maximize()
|   iter    |  target   | max_depth | min_ch... | num_le... | subsample |
-------------------------------------------------------------------------
|  1        |  0.863    |  22.84    |  34.83    |  99.34    |  0.8796   |
|  2        |  0.8522   |  83.66    |  17.52    |  31.27    |  0.8104   |
|  3        |  0.8401   |  12.26    |  70.64    |  14.1     |  0.9035   |
|  4        |  0.8622   |  52.82    |  42.12    |  87.04    |  0.3299   |
|  5        |  0.8508   |  72.6     |  68.95    |  28.94    |  0.3273   |
|  6        |  0.863    |  99.96    |  94.19    |  99.23    |  0.5421   |
|  7        |  0.8631   |  90.84    |  2.475    |  99.34    |  0.8307   |
|  8        |  0.863    |  16.05    |  97.82    |  99.56    |  0.6616   |
|  9        |  0.8015   |  2.946    |  6.104    |  99.57    |  0.4299   |
|  10       |  0.7949   |  97.79    |  95.25    |  3.469    |  0.5979   |
|  11       |  0.8628   |  98.38    |  92.14    |  96.48    |  0.7543   |
|  12       |  0.8629   |  98.71    |  53.87    |  97.57    |  0.1395   |
|  13       |  0.7657   |  38.46    |  2.306    |  2.199    |  0.6655   |
|  14       |  0.8015   |  2.42     |  73.93    |  64.9     |  0.8561   |
|  15       |  0.7657   |  7.479    |  99.23    |  2.366    |  0.5894   |
|  16       |  0.7657   |  99.9     |  8.24     |  2.117    |  0.4889   |
|  17       |  0.863    |  59.63    |  96.77    |  98.46    |  0.6084   |
|  18       |  0.8585   |  99.67    |  35.61    |  55.5     |  0.7627   |
|  19       |  0.7657   |  54.1     |  52.56    |  2.834    |  0.7866   |
|  20       |  0.856    |  98.43    |  98.37    |  44.02    |  0.8604   |
|  21       |  0.8585   |  39.19    |  2.257    |  55.38    |  0.9486   |
|  22       |  0.8015   |  2.655    |  17.16    |  25.65    |  0.9142   |
|  23       |  0.8575   |  96.75    |  2.662    |  48.79    |  0.5475   |
|  24       |  0.8628   |  49.3     |  2.516    |  95.63    |  0.3689   |
|  25       |  0.8565   |  47.02    |  97.36    |  45.15    |  0.892    |
|  26       |  0.8632   |  40.13    |  66.03    |  99.89    |  0.1812   |
|  27       |  0.8572   |  42.57    |  48.11    |  48.67    |  0.9011   |
|  28       |  0.8597   |  74.9     |  10.61    |  64.42    |  0.9671   |
|  29       |  0.8632   |  66.48    |  28.57    |  99.3     |  0.9887   |
|  30       |  0.8592   |  74.97    |  71.13    |  59.46    |  0.9376   |
=========================================================================
1 - rf_bo.max['target']
0.13680780646067503
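
rf_bo.max['params'] holds the best hyperparameters found by the optimizer. A sketch (added for illustration) of plugging them back into LGBMRegressor; num_leaves, max_depth, and min_child_samples must be cast back to int because the optimizer proposes continuous values:

best = rf_bo.max['params']
model = LGBMRegressor(objective='regression_l1',
                      num_leaves=int(best['num_leaves']),
                      max_depth=int(best['max_depth']),
                      subsample=best['subsample'],
                      min_child_samples=int(best['min_child_samples']))
np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=5,
                        scoring=make_scorer(mean_absolute_error)))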

Summary

In this chapter we completed the modeling and tuning work and validated our model. We also applied some basic techniques to improve prediction accuracy; the improvement at each stage is shown in the figure below.

plt.figure(figsize=(13,5))
sns.lineplot(x=['0_origin','1_log_transfer','2_L1_&_L2','3_change_model','4_parameter_turning'], y=[1.36 ,0.19, 0.19, 0.14, 0.13])
<matplotlib.axes._subplots.AxesSubplot at 0x1834d621708>

[Figure: cross-validation MAE at each stage (original 1.36, log transform 0.19, L1 & L2 0.19, model change 0.14, parameter tuning 0.13); output_101_1.png]

Task 4 Modeling and Parameter Tuning: END.

By: 小雨姑娘

A data mining enthusiast with multiple top finishes in competitions.
The author's machine learning notes: https://zhuanlan.zhihu.com/mlbasic

About Datawhale:

Datawhale is an open-source organization focused on data science and AI. It brings together outstanding learners from universities and well-known companies in many fields, and gathers a group of members with an open-source and exploratory spirit. With the vision of "for the learner, growing together with learners", Datawhale encourages its members to be genuine, open and inclusive, to trust and help one another, to dare to try and fail, and to take responsibility. Datawhale also applies open-source principles to explore open-source content, open-source learning, and open-source solutions, empowering talent development, helping people grow, and building connections between people, between people and knowledge, between people and companies, and between people and the future.

For this data mining learning path, the topic materials will be shared on Tianchi; follow Datawhale for details.


Reposted from blog.csdn.net/qq_44315987/article/details/105225585