Random Forest Parameters Explained

Why should you tune machine learning algorithms?

A month ago, I took part in a Kaggle competition called TFI. My first submission put me in the top 50%. I then spent two weeks of tireless effort on feature engineering, which barely got me into the top 20%. To my surprise, after tuning the machine learning algorithm's parameters, I was able to reach the top 10%.

This is why parameter tuning matters in machine learning. Random Forest is one of the simplest machine learning tools used in industry. In our previous articles we introduced random forests and compared them with CART models. These algorithms appear in machine learning toolkits precisely because their performance is so well known.

What is a random forest?

Random Forest is an ensemble method that builds each decision tree on a subset of the observations and a subset of the variables. It builds many such decision trees and then merges them to obtain a more accurate and stable prediction. The rationale is straightforward: a group of largely independent predictions, combined by majority vote, does better than even the best single model's prediction.

We usually treat a random forest as a black box: we feed in data and get predictions back, without worrying about how the model computes them. But the black box itself has a few levers we can play with. Each lever affects, to some extent, either the model's performance or the resource-time trade-off. In this article we will discuss the levers we can adjust while building a random forest model.

Random forest parameters / levers to tune

Random forest parameters fall into two groups: those that increase the predictive power of the model, and those that make the model easier and faster to train. Below we discuss each in detail (note that I am using Python's scikit-learn naming conventions for these parameters):

1. Parameters that make the model predict better

There are three main parameters that can be adjusted to improve the predictive power of the model:

A. max_features:

The maximum number of features the random forest allows an individual decision tree to consider. Python provides several options for this. Here are a few (a minimal sketch of each follows the list):

  1. Auto / None: simply takes all the features, so every single tree can use them. In this case each tree has no restriction at all.

  2. sqrt: this option lets each individual tree use the square root of the total number of features. For example, if the total number of variables (features) is 100, each tree can only consider 10 of them. "log2" is a similar option.

  3. 0.2: this option allows each tree in the random forest to use 20% of the variables (features). If we want to examine the effect of x% of the features, we can use the format "0.x".
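
As promised above, here is a minimal sketch of the three options, using scikit-learn's RandomForestClassifier; the 0.2 value is just an example:

    from sklearn.ensemble import RandomForestClassifier

    # every tree may consider all features at each split
    rf_all = RandomForestClassifier(max_features=None)
    # every tree may consider sqrt(n_features) features at each split
    rf_sqrt = RandomForestClassifier(max_features="sqrt")
    # every tree may consider 20% of the features at each split
    rf_frac = RandomForestClassifier(max_features=0.2)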

How does max_features affect performance and speed?

Increasing max_features generally improves the performance of the model, since at each node we now have more options to consider. However, this is not necessarily entirely true, because it reduces the diversity of the individual trees, which is the unique selling point of random forests. What is certain is that increasing max_features slows the algorithm down. You therefore need to strike the right balance and choose the optimal max_features.
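
One common way to find that balance is a small grid search. The sketch below is merely illustrative (it assumes a feature matrix X and labels y are already loaded, and the candidate values are arbitrary):

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    # try several max_features settings and keep the best by cross-validation
    grid = GridSearchCV(
        RandomForestClassifier(n_estimators=100, random_state=1),
        param_grid={"max_features": ["sqrt", "log2", 0.2, 0.5, None]},
        cv=5,
    )
    grid.fit(X, y)
    print(grid.best_params_)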

B. n_estimators:

This is the number of trees you want to build before taking the maximum vote or the average of the predictions. More trees give the model better performance, but also make your code slower. You should choose as high a value as your processor can handle, because this makes your predictions stronger and more stable.
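
To see this trade-off for yourself, one simple sketch (again assuming X and y are loaded; the tree counts are arbitrary) is to grow the forest in steps and watch the out-of-bag score level off while the fit time keeps growing:

    import time
    from sklearn.ensemble import RandomForestClassifier

    # the OOB score typically rises quickly, then plateaus as trees are added
    for n in [10, 50, 100, 300, 500]:
        model = RandomForestClassifier(n_estimators=n, oob_score=True,
                                       n_jobs=-1, random_state=1)
        start = time.perf_counter()
        model.fit(X, y)
        print(n, model.oob_score_, time.perf_counter() - start)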

C. min_samples_leaf:

If you have built a decision tree before, you can appreciate the importance of the minimum leaf size. A leaf is a terminal node of a decision tree. Smaller leaves make the model more prone to capturing noise in the training data. In general, I prefer to set the minimum number of samples per leaf to more than 50. That said, you should try multiple leaf sizes in your own case to find the optimal one.
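
For example, a sketch of that search over leaf sizes (the values are chosen arbitrarily) might look like this:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # larger leaves smooth over noise; smaller leaves tend to fit it
    for leaf_size in [1, 10, 50, 100]:
        model = RandomForestClassifier(min_samples_leaf=leaf_size,
                                       n_estimators=100, random_state=1)
        print(leaf_size, cross_val_score(model, X, y, cv=5).mean())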

2. Parameters that make model training easier

Several parameters have a direct impact on model training speed. For model speed, here are some key parameters you can adjust:

A. n_jobs:

This parameter tells the engine how many processors it is allowed to use. A value of "-1" means there is no restriction, while a value of "1" means it can only use one processor. Below is a simple experiment in Python to check this:

    from sklearn.ensemble import RandomForestRegressor

    %%timeit
    model = RandomForestRegressor(n_estimators=100, oob_score=True, n_jobs=1, random_state=1)
    model.fit(X, y)

    Output: 1 loop, best of 3: 1.7 s per loop

    %%timeit
    model = RandomForestRegressor(n_estimators=100, oob_score=True, n_jobs=-1, random_state=1)
    model.fit(X, y)

    Output: 1 loop, best of 3: 1.1 s per loop

"%timeit" is a very handy magic; it runs the statement several times and reports the runtime of the fastest loop. This comes in very handy when scaling a particular function from a prototype to the full dataset.

B. random_state:

This parameter makes the results easy to reproduce. A fixed random value will produce the same results as long as the parameters and the training data stay unchanged. I have personally tried ensembling the tuned models from several different random states; sometimes this does better than any single random state.
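
That ensembling trick could look roughly like the following sketch. This is my reading of the idea, not the author's exact code; it assumes a pre-split X_train, y_train, and X_test, and simply averages regression predictions across seeds:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # train the same model under different seeds and average the predictions
    preds = []
    for seed in [1, 7, 42]:
        model = RandomForestRegressor(n_estimators=100, random_state=seed)
        model.fit(X_train, y_train)
        preds.append(model.predict(X_test))
    ensemble_pred = np.mean(preds, axis=0)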

C. oob_score:

This is a random forest cross-validation method. It is very similar to leave-one-out validation, but much faster. The method simply tags the observations used in each tree, and then, for every observation, finds the maximum-vote score using only the trees that did not use that observation for training.
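
In scikit-learn this is exposed through the oob_score flag and the fitted model's oob_score_ attribute; a minimal sketch:

    from sklearn.ensemble import RandomForestRegressor

    model = RandomForestRegressor(n_estimators=100, oob_score=True,
                                  n_jobs=-1, random_state=1)
    model.fit(X, y)
    # score estimated only from trees that never saw each observation
    print(model.oob_score_)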
