Machine Learning - Model Ensembling

References: NTU Machine Learning Techniques  http://blog.csdn.net/lho2010/article/details/42927287

          stacking & blending  http://heamy.readthedocs.io/en/latest/usage.html

1. stacking & blending

blending:

Split the data into train and test. For each base model model_i (e.g. xgboost, GBDT, etc.):
Run 5-fold CV on train: train on 4 of the folds and hold out the remaining fold as val data, giving model_i_j; predict on the val fold to get a vector v_i_j, and on test to get a vector t_i_j.
Do this 5 times so that every train sample gets predicted exactly once; the 5 val vectors are concatenated into v_i and the 5 test vectors combined into t_i.
Each model thus yields two vectors, one for the training set and one for the test set (the test-set predictions from the 5 fold models are averaged).
With k models you get k-dimensional meta-feature vectors.
A top-level model such as LR or another linear model is then trained on the v vectors and used to predict on the t vectors. For example, with 4 base models the second-level training set looks like:

id  model_1  model_2  model_3  model_4  label
1   0.1      0.2      0.14     0.15     0
2   0.2      0.22     0.18     0.3      1
3   0.8      0.7      0.88     0.6      1
4   0.3      0.3      0.2      0.22     0
5   0.5      0.3      0.6      0.5      1


stacking:

Split the data into train and test, then split train into two disjoint parts, train_1 and train_2.

Train each model on train_1 and predict on both train_2 and test, producing two 1-dimensional vectors per model; with k models you get k-dimensional features.

At the second level, the vectors the base models produced on train_2, together with the labels, form a new training set; an LR or other model is trained on it and then applied to the vectors produced on test. A minimal sketch follows below.
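A minimal sketch of this holdout-style stacking, assuming sklearn base models and synthetic data (the model choices, split sizes, and names such as X_1/X_2 are illustrative, not from the original post):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, random_state=0)
# train/test split, then split train into two disjoint halves train_1/train_2
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
X_1, X_2, y_1, y_2 = train_test_split(X_train, y_train, test_size=0.5, random_state=0)

base_models = [RandomForestClassifier(n_estimators=100, random_state=0),
               GradientBoostingClassifier(random_state=0)]

# each base model is fit on train_1 and contributes one meta-feature column
# for train_2 and one for test
meta_train = np.zeros((X_2.shape[0], len(base_models)))
meta_test = np.zeros((X_test.shape[0], len(base_models)))
for j, clf in enumerate(base_models):
    clf.fit(X_1, y_1)
    meta_train[:, j] = clf.predict_proba(X_2)[:, 1]
    meta_test[:, j] = clf.predict_proba(X_test)[:, 1]

# second level: LR trained on the train_2 meta-features and their labels
stacker = LogisticRegression()
stacker.fit(meta_train, y_2)
final_pred = stacker.predict_proba(meta_test)[:, 1]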



The difference between the two:

They split the data differently. In blending, after the train/test split, train is further split by CV for training, so the second level ends up using meta-features for all of the first level's data.

In stacking, after the train/test split, train is split into 2 disjoint parts: one is used for training, the other for generating the new features that train the second level, so the second level only uses part of the data.



Below is an example of blending code:

from __future__ import division
import numpy as np
import load_data  # author's local module (only used by the commented-out loader below)
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from utility import *    # author's local helpers: preprocess_train_input, preprocess_val_input
from evaluator import *  # author's local helper: evaluate2 (AUC)


def logloss(attempt, actual, epsilon=1.0e-15):
    """Logloss, i.e. the score of the bioresponse competition."""
    attempt = np.clip(attempt, epsilon, 1.0 - epsilon)
    return -np.mean(actual * np.log(attempt) + (1.0 - actual) * np.log(1.0 - attempt))


if __name__ == '__main__':
    np.random.seed(0)  # seed to shuffle the train set

    # n_folds = 10
    n_folds = 5
    verbose = True
    shuffle = False

    # X, y, X_submission = load_data.load()
    train_x_id, train_x, train_y = preprocess_train_input()
    val_x_id, val_x, val_y = preprocess_val_input()
    X = train_x
    y = train_y
    X_submission = val_x
    X_submission_y = val_y

    if shuffle:
        idx = np.random.permutation(y.size)
        X = X[idx]
        y = y[idx]

    skf = list(StratifiedKFold(n_splits=n_folds).split(X, y))

    clfs = [RandomForestClassifier(n_estimators=100, n_jobs=-1, criterion='gini'),
            RandomForestClassifier(n_estimators=100, n_jobs=-1, criterion='entropy'),
            ExtraTreesClassifier(n_estimators=100, n_jobs=-1, criterion='gini'),
            ExtraTreesClassifier(n_estimators=100, n_jobs=-1, criterion='entropy'),
            GradientBoostingClassifier(learning_rate=0.05, subsample=0.5, max_depth=6, n_estimators=50)]

    print("Creating train and test sets for blending.")
    dataset_blend_train = np.zeros((X.shape[0], len(clfs)))
    dataset_blend_test = np.zeros((X_submission.shape[0], len(clfs)))

    for j, clf in enumerate(clfs):
        print(j, clf)
        dataset_blend_test_j = np.zeros((X_submission.shape[0], len(skf)))
        for i, (train, test) in enumerate(skf):
            print("Fold", i)
            X_train = X[train]
            y_train = y[train]
            X_test = X[test]
            y_test = y[test]
            clf.fit(X_train, y_train)
            # out-of-fold predictions become this model's meta-feature column
            y_submission = clf.predict_proba(X_test)[:, 1]
            dataset_blend_train[test, j] = y_submission
            dataset_blend_test_j[:, i] = clf.predict_proba(X_submission)[:, 1]
        # average the per-fold test predictions into a single column
        dataset_blend_test[:, j] = dataset_blend_test_j.mean(1)
        print("val auc Score: %0.5f" % (evaluate2(dataset_blend_test[:, j], X_submission_y)))

    print("Blending.")
    # clf = LogisticRegression()
    clf = GradientBoostingClassifier(learning_rate=0.02, subsample=0.5, max_depth=6, n_estimators=100)
    clf.fit(dataset_blend_train, y)
    y_submission = clf.predict_proba(dataset_blend_test)[:, 1]

    print("Linear stretch of predictions to [0,1]")
    y_submission = (y_submission - y_submission.min()) / (y_submission.max() - y_submission.min())
    print("blend result")
    print("val auc Score: %0.5f" % (evaluate2(y_submission, X_submission_y)))

    print("Saving Results.")
    np.savetxt(fname='blend_result.csv', X=y_submission, fmt='%0.9f')


2. rank_avg

This fusion method suits ranking-based evaluation metrics such as AUC. For each sample, the fused score is

result = weight_1 * rank_1 + weight_2 * rank_2 + ... + weight_k * rank_k

where weight_i is model i's weight (all weights equal to 1 gives plain average fusion),

and rank_i is the sample's ascending rank among model i's predictions, so samples ranked near the top stay near the top after fusion.

This exploits the ranking differences between models quickly, without having to fuse the models' weighted probability values directly.
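A minimal rank-averaging sketch using scipy's rankdata; the two prediction arrays and the weights are made-up inputs:

import numpy as np
from scipy.stats import rankdata

preds = [np.array([0.1, 0.8, 0.3, 0.5]),   # model 1 probabilities
         np.array([0.2, 0.6, 0.4, 0.9])]   # model 2 probabilities
weights = [1.0, 1.0]  # all weights 1 = plain rank averaging

# replace each model's scores by their ascending ranks, then weight and sum
fused = sum(w * rankdata(p) for w, p in zip(weights, preds))
# optional: rescale back to [0, 1] so it reads like a score (monotonic, so AUC is unchanged)
fused = (fused - fused.min()) / (fused.max() - fused.min())
print(fused)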


3. weighted

Weighted fusion: assign each model a weight, then take the weighted combination as the final result, e.g. for two models

result = weight * result_1 + (1 - weight) * result_2

where weight = 0.5 gives mean fusion and result_i is model i's output.

Usually one considers both the similarity between models and their individual scores:

give higher-scoring models larger weights, and prefer fusing models with relatively low similarity to each other.
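A minimal weighted-fusion sketch; the predictions and weights below are made-up inputs:

import numpy as np

preds = [np.array([0.1, 0.8, 0.3]),  # model 1 output
         np.array([0.2, 0.6, 0.4])]  # model 2 output
weights = [0.7, 0.3]  # the higher-scoring model gets the larger weight; weights sum to 1

result = sum(w * p for w, p in zip(weights, preds))
print(result)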


4. bagging

Build diverse models by varying the features, parameters, and samples each one sees, then fuse them; see random forest. A minimal sketch is given below.
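A minimal bagging sketch using sklearn's BaggingClassifier (which bags decision trees by default); the dataset and sampling fractions are illustrative assumptions:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=1000, random_state=0)
# diversity comes from resampling both rows (max_samples) and columns (max_features)
bag = BaggingClassifier(n_estimators=50, max_samples=0.8, max_features=0.8,
                        random_state=0)
bag.fit(X, y)
print(bag.predict_proba(X[:5])[:, 1])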


5. boosting

See adaboost, gbdt, xgboost. A minimal sketch is given below.
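A minimal boosting sketch using sklearn's AdaBoostClassifier; the dataset and parameters are illustrative assumptions:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=1000, random_state=0)
# each round reweights the samples that previous weak learners misclassified
boost = AdaBoostClassifier(n_estimators=100, learning_rate=0.5, random_state=0)
boost.fit(X, y)
print(boost.predict_proba(X[:5])[:, 1])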

Reposted from: https://blog.csdn.net/bryan__/article/details/51229032


