Ensemble Learning and Random Forests

Ensemble Learning

import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
X, y = datasets.make_moons(n_samples=500, noise=0.3, random_state=42)
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()


from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

from sklearn.linear_model import LogisticRegression
log_clf = LogisticRegression()
log_clf.fit(X_train, y_train)
log_clf.score(X_test, y_test)# ->0.86399999999

from sklearn.svm import SVC
svm_clf = SVC()
svm_clf.fit(X_train, y_train)
svm_clf.score(X_test, y_test)#->0.8880000000001

from sklearn.tree import DecisionTreeClassifier
dt_clf = DecisionTreeClassifier(random_state=666)
dt_clf.fit(X_train, y_train)
dt_clf.score(X_test, y_test)#->0.86399999999999999
y_predict1 = log_clf.predict(X_test)
y_predict2 = svm_clf.predict(X_test)
y_predict3 = dt_clf.predict(X_test)
y_predict = np.array((y_predict1 + y_predict2 + y_predict3) >= 2, dtype='int')  # majority vote: predict 1 when at least two of the three classifiers predict 1
y_predict[:10] #->array([1, 0, 0, 1, 1, 1, 0, 0, 0, 0])
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_predict)#->0.89600000000000002

Using the Voting Classifier

Hard voting classifier: majority rule (the minority yields to the majority)

from sklearn.ensemble import VotingClassifier
voting_clf = VotingClassifier(estimators=[
    ('log_clf', LogisticRegression()), 
    ('svm_clf', SVC()),
    ('dt_clf', DecisionTreeClassifier(random_state=666))],
                             voting='hard')
voting_clf.fit(X_train, y_train)
voting_clf.score(X_test, y_test)#->0.89600000000000002

Soft voting classifier: the votes carry weights, given by each classifier's predicted probabilities
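
A toy illustration of the difference (the probabilities below are made up, not produced by the models above): averaging predicted probabilities can reach a different decision than counting hard votes.

import numpy as np
probas = np.array([0.49, 0.45, 0.90])    # hypothetical class-1 probabilities from three classifiers
hard_votes = (probas >= 0.5).astype(int) # -> [0, 0, 1]: two of the three hard votes go to class 0
int(hard_votes.sum() >= 2)               # hard voting therefore predicts class 0
int(probas.mean() >= 0.5)                # the mean probability is about 0.61, so soft voting predicts class 1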


Using the Soft Voting Classifier

voting_clf2 = VotingClassifier(estimators=[
    ('log_clf', LogisticRegression()),
    ('svm_clf', SVC(probability=True)),  # SVC does not compute probabilities by default
    ('dt_clf', DecisionTreeClassifier(random_state=666))],
                             voting='soft')
voting_clf2.fit(X_train, y_train)
voting_clf2.score(X_test, y_test)   #->0.91200000000000003

Bagging and Pasting

  • Although there are many machine learning algorithms, from a voting point of view there still are not enough voters.
  • So we create many more sub-models and aggregate the opinions of those sub-models.
  • The sub-models must not all agree; there has to be diversity among them.
  • Creating diversity, approach 1:
    • Each sub-model only looks at part of the sample data. For example, with 500 samples in total, each sub-model sees only 100 of them (the sub-models can all be the same kind of classifier).
    • Each sub-model does not need very high accuracy; this is exactly where the power of ensemble learning lies (see the sketch after this list).

  • Creating diversity, approach 2:
    • Sampling: with replacement (bagging) or without replacement (pasting)
    • Sampling with replacement is more common (it allows far more distinct sub-samples and depends less on how one particular split happens to fall)
    • Making every sub-model in the bagging classifier a decision tree (a non-parametric learner that makes no strong assumptions about the form of the target function and can therefore learn almost any functional form from the training data) tends to produce sub-models that differ a lot from one another
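
To see why the sub-models can afford to be weak, here is a minimal sketch under an idealized assumption (the sub-models vote independently, which real ensembles only approximate): with n sub-models, each correct with probability p, the majority vote is correct with a binomial tail probability.

from math import comb   # Python 3.8+

def majority_accuracy(n, p):
    # probability that more than half of n independent sub-models,
    # each correct with probability p, vote for the right class
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

majority_accuracy(3, 0.6)     # ≈ 0.648: three weak voters already beat a single one
majority_accuracy(500, 0.6)   # ≈ 0.99999: five hundred weak voters are almost always right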

Using Bagging (sampling with replacement, known in statistics as the bootstrap)

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
bagging_clf = BaggingClassifier(DecisionTreeClassifier(),
                           n_estimators=500, max_samples=100,
                           bootstrap=True)  # n_estimators=500: number of decision trees in the ensemble; max_samples=100: samples seen by each sub-model
bagging_clf.fit(X_train, y_train)
bagging_clf.score(X_test, y_test)#->0.91200000000000003
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
bagging_clf = BaggingClassifier(DecisionTreeClassifier(),
                           n_estimators=5000, max_samples=100,
                           bootstrap=True)
bagging_clf.fit(X_train, y_train)
bagging_clf.score(X_test, y_test)#->0.92000000000000004

OOB and More Options for Bagging

OOB (out-of-bag)

When sampling with replacement, there is some probability that part of the samples is never drawn at all (on average, about 37% of the samples are never drawn); bagging can use exactly these out-of-bag samples as a built-in validation set, which is what oob_score reports.
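
A quick check of that 37% figure (a hedged sketch; the figure applies when the bootstrap sample is as large as the dataset, i.e. m draws with replacement from m samples): the probability that a given sample is never drawn is (1 - 1/m)^m, which tends to 1/e ≈ 0.368.

import numpy as np
m = 500
(1 - 1/m) ** m                     # ≈ 0.3675: probability that one particular sample is never drawn
np.exp(-1)                         # ≈ 0.3679: the limit 1/e for large m
rng = np.random.default_rng(0)     # empirical check with a single bootstrap sample
draws = rng.integers(0, m, size=m)
1 - len(np.unique(draws)) / m      # roughly 0.37 of the indices never appear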

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
bagging_clf = BaggingClassifier(DecisionTreeClassifier(),
                               n_estimators=500, max_samples=100,
                               bootstrap=True, oob_score=True)
bagging_clf.fit(X, y)

Out[4]:

BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=None,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=100, n_estimators=500, n_jobs=1, oob_score=True,
         random_state=None, verbose=0, warm_start=False)

Using oob_score_

bagging_clf.oob_score_ #->0.91800000000000004

n_jobs

  • Ensemble learning lends itself very well to parallelization.
  • The samples for each sub-model are drawn independently,
  • so the individual sub-models can be trained independently of one another.

%%time
bagging_clf = BaggingClassifier(DecisionTreeClassifier(),
                               n_estimators=500, max_samples=100,
                               bootstrap=True, oob_score=True)
bagging_clf.fit(X, y)
CPU times: user 1.81 s, sys: 27.2 ms, total: 1.84 s
Wall time: 2.95 s
%%time
bagging_clf = BaggingClassifier(DecisionTreeClassifier(),
                               n_estimators=500, max_samples=100,
                               bootstrap=True, oob_score=True,
                               n_jobs=-1)
bagging_clf.fit(X, y)
CPU times: user 385 ms, sys: 56.1 ms, total: 441 ms
Wall time: 1.83 s

bootstrap_features

  • Random sampling of the features: Random Subspaces
  • Random sampling of both the samples and the features: Random Patches
  • If you picture the data as a 2-D matrix, Random Patches is random along the row dimension (samples) and along the column dimension (features) at the same time
Random Subspaces
random_subspaces_clf = BaggingClassifier(DecisionTreeClassifier(),
                               n_estimators=500, max_samples=500,
                               bootstrap=True, oob_score=True,
                               max_features=1, bootstrap_features=True)
# max_samples=500: random sampling of the observations is effectively switched off, since each draw is as large as the whole dataset
# max_features=1: sample the features randomly, each tree looks at a single feature
# bootstrap_features=True: the features are sampled with replacement
random_subspaces_clf.fit(X, y)
random_subspaces_clf.oob_score_#->0.83399999999999996
Random Patches
random_patches_clf = BaggingClassifier(DecisionTreeClassifier(),
                               n_estimators=500, max_samples=100,
                               bootstrap=True, oob_score=True,
                               max_features=1, bootstrap_features=True)
# max_samples=100: sample the observations randomly at the same time as the features
random_patches_clf.fit(X, y)
random_patches_clf.oob_score_#->0.85799999999999998

Random Forest

  • Base estimator: Decision Tree. Every base classifier in the ensemble is a decision tree.
  • When splitting a node, each decision tree looks for the best splitting feature within a random subset of the features.
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=666, n_jobs=-1)
rf_clf.fit(X, y)

Out[4]:

RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=500, n_jobs=-1,
            oob_score=True, random_state=666, verbose=0, warm_start=False)
rf_clf.oob_score_ #->0.89200000000000002
rf_clf2 = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, oob_score=True, random_state=666, n_jobs=-1)  # max_leaf_nodes: the maximum number of leaf nodes per tree
rf_clf2.fit(X, y)
rf_clf2.oob_score_#->0.90600000000000003

A random forest exposes all the hyperparameters of both the decision tree and the BaggingClassifier.
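
For example (a hedged sketch, with arbitrary numbers), tree-level parameters such as max_depth and min_samples_leaf can be mixed freely with ensemble-level parameters such as n_estimators, oob_score and n_jobs:

rf_clf3 = RandomForestClassifier(n_estimators=500, oob_score=True, n_jobs=-1,   # BaggingClassifier-style parameters
                                 max_depth=6, min_samples_leaf=2,               # DecisionTreeClassifier-style parameters
                                 random_state=666)
rf_clf3.fit(X, y)
rf_clf3.oob_score_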

Extra-Trees (Extremely Randomized Trees)

  • When splitting a node, each decision tree uses a random feature and a random threshold.
  • This provides extra randomness and suppresses overfitting (every tree is extremely random, which reduces variance), but it increases the bias.
  • Node splitting requires almost no search, so training is considerably faster.
from sklearn.ensemble import ExtraTreesClassifier
et_clf = ExtraTreesClassifier(n_estimators=500, bootstrap=True, oob_score=True, random_state=666, n_jobs=-1)
et_clf.fit(X, y)
et_clf.oob_score_ #->0.89200000000000002

Ensemble Learning for Regression

from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
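
A hedged usage sketch on a synthetic 1-D regression target (the sine data, the hyperparameters and the variable names are illustrative only); the regressors share the interface of their classifier counterparts, and oob_score_ here reports the R^2 score on the out-of-bag samples:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

X_reg = np.random.uniform(-3, 3, size=(500, 1))
y_reg = np.sin(X_reg[:, 0]) + np.random.normal(0, 0.2, size=500)

bagging_reg = BaggingRegressor(DecisionTreeRegressor(), n_estimators=500,
                               max_samples=100, bootstrap=True, oob_score=True)
bagging_reg.fit(X_reg, y_reg)
bagging_reg.oob_score_      # R^2 measured on the out-of-bag samples

rf_reg = RandomForestRegressor(n_estimators=500, oob_score=True, n_jobs=-1)
rf_reg.fit(X_reg, y_reg)
rf_reg.oob_score_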

Another Kind of Ensemble Learning: Boosting

  • Combine multiple models.
  • Each model tries to boost the overall performance of the ensemble.

AdaBoost

  • In each new round of learning, increase the weights of the points that the previous model fit poorly, and train a new sub-model on the re-weighted data.
  • In the end, all the sub-models vote together.

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=2), n_estimators=500)
ada_clf.fit(X_train, y_train)

Out[5]:

AdaBoostClassifier(algorithm='SAMME.R',
          base_estimator=DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=2,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=None,
            splitter='best'),
          learning_rate=1.0, n_estimators=500, random_state=None)
ada_clf.score(X_test, y_test)#->0.85599999999999998

Gradient Boosting

  • Train a model m1, which leaves errors e1.
  • Train a second model m2 on e1, which leaves errors e2.
  • Train a third model m3 on e2, which leaves errors e3, and so on.
  • The final prediction is m1 + m2 + m3 + … (a hand-rolled sketch follows; the scikit-learn classifier after it implements the same idea).
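
A hand-rolled sketch of that residual-fitting loop on a made-up regression target (the synthetic data, the three rounds and names such as residual are illustrative only; GradientBoostingClassifier below wraps the same idea for classification, adding a learning rate and a classification loss):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

x_gb = np.random.uniform(-3, 3, size=(300, 1))
t_gb = np.sin(x_gb[:, 0]) + np.random.normal(0, 0.2, size=300)

models = []
residual = t_gb.copy()
for _ in range(3):                        # train m1, m2, m3 in sequence
    m = DecisionTreeRegressor(max_depth=2)
    m.fit(x_gb, residual)                 # each model fits the errors left by the previous ones
    residual -= m.predict(x_gb)           # e1, e2, e3 ...
    models.append(m)
    prediction = sum(mi.predict(x_gb) for mi in models)   # m1 + m2 + m3
    print(np.mean((prediction - t_gb) ** 2))              # the training MSE shrinks after every round
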
from sklearn.ensemble import GradientBoostingClassifier
gb_clf = GradientBoostingClassifier(max_depth=2, n_estimators=30)
gb_clf.fit(X_train, y_train)
GradientBoostingClassifier(criterion='friedman_mse', init=None,
              learning_rate=0.1, loss='deviance', max_depth=2,
              max_features=None, max_leaf_nodes=None,
              min_impurity_decrease=0.0, min_impurity_split=None,
              min_samples_leaf=1, min_samples_split=2,
              min_weight_fraction_leaf=0.0, n_estimators=30,
              presort='auto', random_state=None, subsample=1.0, verbose=0,
              warm_start=False)
gb_clf.score(X_test, y_test)#->0.90400000000000003

Boosting for Regression

from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import GradientBoostingRegressor
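
The same kind of hedged sketch for the boosting regressors (again on made-up sine data; by default AdaBoostRegressor uses a shallow decision tree as its base learner):

import numpy as np

X_reg = np.random.uniform(-3, 3, size=(500, 1))
y_reg = np.sin(X_reg[:, 0]) + np.random.normal(0, 0.2, size=500)

ada_reg = AdaBoostRegressor(n_estimators=500)
ada_reg.fit(X_reg, y_reg)
ada_reg.score(X_reg, y_reg)       # R^2 on the training data

gb_reg = GradientBoostingRegressor(max_depth=2, n_estimators=30)
gb_reg.fit(X_reg, y_reg)
gb_reg.score(X_reg, y_reg)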

Stacking

  • In stacking, the predictions of the three Layer 1 models are used as the inputs to Layer 2, where one more model is trained on top of them.
  • Layer 2 can likewise contain several models, whose outputs are in turn gathered into Layer 3, and so on.
  • Each layer has to be trained on different data, so the training set is first split into three parts: the first part trains Layer 1, and the second part, passed through the trained Layer 1, provides the training data for Layer 2 (and so on for Layer 3).
  • For classification each model outputs a probability; to solve regression problems, each model outputs a numeric value instead.
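
scikit-learn added a StackingClassifier in version 0.22 (newer than the version whose output appears above); it implements this idea with out-of-fold predictions rather than a literal split of the training set into thirds. A hedged sketch:

from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

stacking_clf = StackingClassifier(
    estimators=[('log_clf', LogisticRegression()),
                ('svm_clf', SVC()),
                ('dt_clf', DecisionTreeClassifier(random_state=666))],
    final_estimator=LogisticRegression(),  # the Layer 2 model, trained on the Layer 1 outputs
    cv=3)                                  # out-of-fold predictions play the role of the held-out second portion
stacking_clf.fit(X_train, y_train)
stacking_clf.score(X_test, y_test)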


Reprinted from blog.csdn.net/zhaohaibo_/article/details/80599723