A Simple Understanding and Implementation of Ensemble Machine Learning (II): Bagging ensemble principle, random forest construction process, random forest API introduction, random forest prediction case, advantages of bagging ensembles

Ensemble Learning

Learning objectives

  • Understand the two core tasks that ensemble learning solves
  • Know the principle of bagging ensembles
  • Know the process of building a random forest
  • Know why bootstrap sampling (with replacement) is needed
  • Apply RandomForestClassifier to implement the random forest algorithm
  • Know the principle of boosting ensembles
  • Know the difference between bagging and boosting
  • Learn the GBDT implementation process

5.2 Bagging

1 Bagging ensemble principle

Goal: classify the circles and squares below

[Image: ../images/bagging1.png]

Implementation process:

1. Sample different training data sets

[Image: ../images/bagging2.png]

2. Train a classifier on each data set

[Image: ../images/bagging3.png]

3. Vote with equal weight to obtain the final result

[Image: ../images/bagging4.png]

4. Summary of the main implementation process (a code sketch follows the figure below)

[Image: ../images/bagging5.png]
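The three steps above map directly to code. Here is a minimal hand-rolled sketch of bagging with decision trees, using a synthetic make_classification dataset as a stand-in for the circle/square toy problem (the data, tree count, and random seeds are illustrative assumptions, not from the original post):

# Minimal bagging sketch: bootstrap sampling + per-sample training + equal-weight voting
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data (assumption for illustration)
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=42)

rng = np.random.RandomState(42)
classifiers = []
for _ in range(15):
    # Step 1: draw a bootstrap sample (with replacement) of the training set
    idx = rng.randint(0, len(x_train), size=len(x_train))
    # Step 2: train one classifier per bootstrap sample
    classifiers.append(DecisionTreeClassifier().fit(x_train[idx], y_train[idx]))

# Step 3: equal-weight vote across all classifiers
votes = np.stack([clf.predict(x_test) for clf in classifiers])
y_pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("bagging accuracy:", (y_pred == y_test).mean())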

2 Random forest construction process

In machine learning, a random forest is a classifier consisting of multiple decision trees, and its output class is the mode of the classes output by the individual trees.

Random forest = Bagging + decision trees

[Image: ../images/tf1.png]

For example, if you train five trees and four of them output True while one outputs False, then the final voting result is True.

Key steps in the random forest construction process (let N be the number of training samples and M the number of features; a code sketch follows the questions below):

1) Randomly draw one sample at a time, with replacement, repeating N times (duplicate samples may appear)

2) Randomly select m features, with m << M, and build the decision tree on them

  • Think

    • 1. Why randomly sample the training set?


      If there were no random sampling, every tree would share the same training set, and the classification results of the trained trees would be exactly the same.

    • 2. Why sample with replacement?

      If sampling were done without replacement, each tree's training samples would be completely different and disjoint, so every tree would be "biased" and absolutely "one-sided" (admittedly this phrasing may not be exact); in other words, the trained trees would differ greatly from one another, while the random forest's final classification depends on the vote of many trees (weak classifiers).
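To make the two randomization steps concrete, here is an illustrative sketch of how one tree's training data could be assembled (the function name bootstrap_and_feature_subset is hypothetical; note that scikit-learn's RandomForestClassifier actually re-draws max_features candidate features at every split rather than once per tree):

# Illustrative sketch of the two randomization steps, not a production implementation
import numpy as np

def bootstrap_and_feature_subset(X, y, m, rng):
    """Assemble one tree's training data: N rows drawn with replacement,
    plus a random subset of m out of M features (m << M)."""
    N, M = X.shape
    row_idx = rng.randint(0, N, size=N)              # step 1: N samples, with replacement
    feat_idx = rng.choice(M, size=m, replace=False)  # step 2: m random features
    return X[row_idx][:, feat_idx], y[row_idx], feat_idx

rng = np.random.RandomState(0)
X, y = rng.randn(100, 16), rng.randint(0, 2, size=100)
X_tree, y_tree, feats = bootstrap_and_feature_subset(X, y, m=4, rng=rng)
print(X_tree.shape, feats)  # (100, 4) plus the 4 chosen feature indices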

3 Random forest API introduction

  • sklearn.ensemble.RandomForestClassifier(n_estimators=10, criterion='gini', max_depth=None, bootstrap=True, random_state=None, min_samples_split=2)
    • n_estimators: integer, optional (default=10). The number of trees in the forest; typical search values: 120, 200, 300, 500, 800, 1200
    • criterion: string, optional (default="gini"). The function used to measure the quality of a split
    • max_depth: integer or None, optional (default=None). The maximum depth of a tree; typical search values: 5, 8, 15, 25, 30
    • max_features="auto". The maximum number of features considered when looking for the best split
      • If "auto", then max_features=sqrt(n_features).
      • If "sqrt", then max_features=sqrt(n_features) (same as "auto").
      • If "log2", then max_features=log2(n_features).
      • If None, then max_features=n_features.
    • bootstrap: boolean, optional (default=True). Whether to use sampling with replacement when building trees
    • min_samples_split: the minimum number of samples required to split a node
    • min_samples_leaf: the minimum number of samples required at a leaf node
  • Hyperparameters to tune: n_estimators, max_depth, min_samples_split, min_samples_leaf
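For concreteness, here is a minimal instantiation that spells out the parameters listed above (the values are essentially the defaults, not tuned for any dataset; note that in recent scikit-learn versions the "auto" option for max_features has been removed, so "sqrt" is used here):

from sklearn.ensemble import RandomForestClassifier

# Spell out the parameters discussed above (default-like values, untuned)
rf = RandomForestClassifier(
    n_estimators=10,       # number of trees in the forest
    criterion="gini",      # split quality measure
    max_depth=None,        # grow each tree until its leaves are pure
    max_features="sqrt",   # sqrt(n_features) per split ("auto" removed in newer sklearn)
    bootstrap=True,        # sample with replacement
    min_samples_split=2,   # minimum samples to split a node
    min_samples_leaf=1,    # minimum samples at a leaf
    random_state=None,
)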

4 Random forest prediction case

  • Instantiate a random forest
# Use a random forest for prediction
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier()
  • Define the hyperparameter candidate grid
param = {"n_estimators": [120, 200, 300, 500, 800, 1200], "max_depth": [5, 8, 15, 25, 30]}
  • Use GridSearchCV for grid search
# Hyperparameter tuning via grid search
# (x_train, y_train, x_test, y_test are assumed to come from an earlier
# train/test split in this series)
from sklearn.model_selection import GridSearchCV

gc = GridSearchCV(rf, param_grid=param, cv=2)

gc.fit(x_train, y_train)

print("Random forest prediction accuracy:", gc.score(x_test, y_test))

Note

  • The process of building a random forest
  • Tree depth, the number of trees, and other hyperparameters need to be tuned

5 Advantages of bagging ensembles

Bagging + decision trees / linear regression / logistic regression / deep learning / ... = bagging ensemble learning method

An ensemble learning method built in the way above:

  1. Can improve the generalization accuracy of the original algorithm by about 2%
  2. Is simple, convenient, and general-purpose


Origin blog.csdn.net/qq_35456045/article/details/104644899