Ensemble learning
Learning objectives
- Understand the two core tasks that ensemble learning addresses
- Know the principle of the bagging ensemble method
- Know the process of building a random forest
- Know why random sampling with replacement (bootstrap sampling) is needed
- Apply RandomForestClassifier to implement the random forest algorithm
- Know the principle of the boosting ensemble method
- Know the difference between bagging and boosting
- Know the GBDT implementation process
5.2 Bagging
1 Bagging ensemble principle
Goal: classify the circles and squares below
Implementation process (a code sketch follows the list):
1. Sample different datasets
2. Train a classifier on each dataset
3. Vote with equal weight to obtain the final result
4. Summarize the main implementation process
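Below is a minimal sketch of steps 1–3, assuming scikit-learn's DecisionTreeClassifier as the base learner and non-negative integer class labels; the function names bagging_fit and bagging_predict are illustrative, not a library API.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_learners=5, seed=0):
    # Steps 1 and 2: train one classifier per bootstrap sample
    rng = np.random.default_rng(seed)
    n = len(X)
    learners = []
    for _ in range(n_learners):
        idx = rng.integers(0, n, size=n)  # draw n samples with replacement
        learners.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return learners

def bagging_predict(learners, X):
    # Step 3: equal-weight vote; the most frequent class wins
    votes = np.array([clf.predict(X) for clf in learners])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)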
2 Random forest construction process
In machine learning, a random forest is a classifier that contains multiple decision trees, and its output class is the mode of the classes output by the individual trees.
Random forest = bagging + decision trees
For example, if you train five trees and four of them output True while one outputs False, the final vote result is True; the short snippet below reproduces this vote.
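The vote itself takes only a couple of lines with Python's collections.Counter:

from collections import Counter

tree_outputs = [True, True, True, True, False]  # outputs of the five trees
majority = Counter(tree_outputs).most_common(1)[0][0]
print(majority)  # True — the class chosen by four of the five trees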
Key steps in constructing a random forest (where N denotes the number of training samples and M the number of features; a sketch follows the list):
1) Randomly select one sample at a time, sampling with replacement, repeated N times (duplicate samples may appear)
2) Randomly select m features, with m << M, and build the decision tree on them
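Here is a sketch of these two randomization steps, assuming m = sqrt(M) (a common default) and one feature subset per tree; note that sklearn's implementation actually re-samples features at every split rather than once per tree, so this is a simplification.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def build_random_forest(X, y, n_trees=10, seed=0):
    rng = np.random.default_rng(seed)
    N, M = X.shape
    m = max(1, int(np.sqrt(M)))  # m << M
    forest = []
    for _ in range(n_trees):
        rows = rng.integers(0, N, size=N)            # step 1: N draws with replacement
        cols = rng.choice(M, size=m, replace=False)  # step 2: random feature subset
        tree = DecisionTreeClassifier().fit(X[np.ix_(rows, cols)], y[rows])
        forest.append((tree, cols))  # keep the feature subset for prediction later
    return forest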
Think about it:
1. Why randomly sample the training set?
If the training set were not randomly sampled, every tree would be trained on the same data, so every trained tree would give exactly the same classification result.
2. Why sample with replacement?
If the sampling were done without replacement, every tree's training samples would be different and would have no overlap, so every tree would be "biased" and absolutely "one-sided" (admittedly, this phrasing may not be strictly accurate); in other words, each trained tree would differ greatly from the others. Yet the final classification of a random forest depends on a majority vote across multiple trees (weak classifiers). The short demo below illustrates sampling with replacement.
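Both effects, duplicates within one bootstrap sample and overlap between samples, are easy to see with a few lines of NumPy (the seed value here is arbitrary):

import numpy as np

rng = np.random.default_rng(42)
N = 10
sample_a = rng.integers(0, N, size=N)  # bootstrap sample for tree A
sample_b = rng.integers(0, N, size=N)  # bootstrap sample for tree B
print(sorted(sample_a))                # duplicates appear within one sample
print(set(sample_a) & set(sample_b))   # yet the two trees still share samples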
3 Random forest API introduction
- sklearn.ensemble.RandomForestClassifier(n_estimators=10, criterion='gini', max_depth=None, bootstrap=True, random_state=None, min_samples_split=2)
- n_estimators: integer, optional (default=10). The number of trees in the forest; values worth trying: 120, 200, 300, 500, 800, 1200
- criterion: string, optional (default='gini'). The function that measures the quality of a split
- max_depth: integer or None, optional (default=None). The maximum depth of a tree; values worth trying: 5, 8, 15, 25, 30
- max_features='auto'. The maximum number of features considered by each decision tree:
  - If 'auto', then max_features=sqrt(n_features)
  - If 'sqrt', then max_features=sqrt(n_features) (same as 'auto')
  - If 'log2', then max_features=log2(n_features)
  - If None, then max_features=n_features
- bootstrap: boolean, optional (default=True). Whether to sample with replacement when building trees
- min_samples_split: the minimum number of samples required to split a node
- min_samples_leaf: the minimum number of samples required at a leaf node
- Key hyperparameters: n_estimators, max_depth, min_samples_split, min_samples_leaf
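Putting the parameters above together, a possible instantiation looks like the following; the specific values are just picked from the candidate lists above, and 'sqrt' is used for max_features because newer sklearn versions have removed the 'auto' alias:

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=120,      # number of trees in the forest
    criterion="gini",      # split quality measure
    max_depth=8,           # maximum depth of each tree
    max_features="sqrt",   # sqrt(n_features) features considered per split
    bootstrap=True,        # sample with replacement when building trees
    min_samples_split=2,   # minimum samples needed to split a node
    min_samples_leaf=1,    # minimum samples required at a leaf
)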
4 Random forest prediction example
- Instantiate the random forest
from sklearn.ensemble import RandomForestClassifier

# Use a random forest to make predictions
rf = RandomForestClassifier()
- Define the candidate hyperparameter grid
param = {"n_estimators": [120, 200, 300, 500, 800, 1200], "max_depth": [5, 8, 15, 25, 30]}
- Use GridSearchCV for a grid search
from sklearn.model_selection import GridSearchCV

# Hyperparameter tuning
gc = GridSearchCV(rf, param_grid=param, cv=2)
gc.fit(x_train, y_train)
print("Random forest prediction accuracy:", gc.score(x_test, y_test))
Note
- Remember the process of building a random forest
- Tree depth, the number of trees, and other hyperparameters need tuning
5 Advantages of bagging ensembles
Bagging + decision trees / linear regression / logistic regression / deep learning / ... = bagging ensemble learning method
An ensemble learning method formed this way:
- Can improve generalization accuracy by roughly 2% over the base algorithm
- Is simple, convenient, and general-purpose