Comparing XGBoost and GBDT; XGBoost and LightGBM

GBDT vs. XGBoost

1. Traditional GBDT uses CART trees as base learners, while XGBoost also supports linear base learners (gblinear). In that case XGBoost is equivalent to logistic regression (for classification) or linear regression (for regression) with L1 and L2 regularization terms (a code sketch follows this list).
2. Traditional GBDT uses only first-order derivative information, whereas XGBoost applies a second-order Taylor expansion to the cost function and uses both first and second derivatives (the objective is written out after this list).
3. XGBoost adds a regularization term to the cost function to control model complexity. The term includes the number of leaf nodes in the tree and the squared L2 norm of the score output at each leaf. From the bias-variance-tradeoff point of view, the regularization term reduces the model's variance, making the learned model simpler and less prone to overfitting; this is one advantage of XGBoost over traditional GBDT.
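To make points 2 and 3 concrete, here is the regularized second-order objective from the XGBoost paper, where $g_i$ and $h_i$ are the first and second derivatives of the loss with respect to the previous round's prediction, $T$ is the number of leaves, and $w_j$ is the score of leaf $j$:

$$
\mathcal{L}^{(t)} \approx \sum_{i=1}^{n}\Big[\, g_i f_t(x_i) + \tfrac{1}{2}\, h_i f_t^2(x_i) \Big] + \Omega(f_t),
\qquad
\Omega(f_t) = \gamma T + \tfrac{1}{2}\,\lambda \sum_{j=1}^{T} w_j^2
$$

And a minimal Python sketch of point 1 (assuming the xgboost and scikit-learn packages; the dataset is synthetic and purely illustrative): with booster="gblinear" plus reg_alpha (L1) and reg_lambda (L2), XGBoost behaves like a regularized linear model rather than a tree ensemble.

```python
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic data, purely for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Linear base learner with L1 (reg_alpha) and L2 (reg_lambda) penalties:
# effectively regularized logistic regression trained by boosting.
clf = xgb.XGBClassifier(booster="gblinear", reg_alpha=0.1, reg_lambda=1.0)
clf.fit(X, y)
print(clf.predict(X[:5]))
```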

Which of the following statements about classifiers is incorrect? ( )

Correct answer: C; answer given: D (incorrect)

A. The goal of an SVM is to find the hyperplane that separates the training data as well as possible while maximizing the classification margin; this belongs to structural risk minimization.
B. Naive Bayes is a special Bayesian classifier; it assumes that every feature variable is independent of the others.
C. XGBoost is an excellent ensemble algorithm whose advantages include high speed, insensitivity to outliers, support for custom loss functions, and so on.
D. Random forests ensure randomness through column sampling during training, so they are not prone to overfitting even without pruning.
Explanation: the core idea of GBDT is that each tree learns the residual of the sum of all previous trees' conclusions; the residual is the amount that, added to the prediction, yields the true value. XGBoost is much like GBDT but also supports linear base learners.
XGBoost supports custom loss functions and is very fast, but it is very sensitive to outliers, which is why statement C is incorrect.
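A custom loss function in XGBoost is supplied as a callable that returns the gradient and hessian of the loss with respect to the current predictions. A minimal sketch, assuming the xgboost and scikit-learn packages and using a plain squared-error loss purely for illustration:

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_regression

def squared_error_obj(preds, dtrain):
    """Custom objective: return the first and second derivatives of
    L = 0.5 * (pred - label)^2 with respect to the predictions."""
    labels = dtrain.get_label()
    grad = preds - labels        # first derivative
    hess = np.ones_like(preds)   # second derivative (constant here)
    return grad, hess

# Synthetic data, purely for illustration.
X, y = make_regression(n_samples=500, n_features=10, random_state=0)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=50, obj=squared_error_obj)
```

Because the objective only exposes the gradient and hessian, a loss like squared error gives a gradient that grows linearly with the residual, so a single extreme label can dominate the tree splits; this is the intuition behind XGBoost's sensitivity to outliers.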
Origin www.cnblogs.com/ivyharding/p/11390735.html