Decision Trees and Related Algorithms

This post records the underlying principles of decision-tree algorithms, for use when preparing for interview questions ...

Decision Tree

Information-entropy decision tree:

entropy(D) = -\sum_{i=1}^n P_i \log_2 P_i

Gain(A) = entropy(D) - entropy_A(D)

where entropy_A(D) is the size-weighted entropy of the subsets obtained by splitting D on attribute A. At each node, the tree chooses the attribute A that maximizes Gain(A).
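As a sketch, both formulas can be computed directly from label counts (the function names here are illustrative, not from any particular library):

```python
import math
from collections import Counter

def entropy(labels):
    """entropy(D) = -sum_i P_i * log2(P_i) over the class frequencies in D."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Gain(A) = entropy(D) - entropy_A(D), where entropy_A(D) is the
    size-weighted entropy of the subsets produced by splitting D on A."""
    n = len(labels)
    entropy_A = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - entropy_A

labels = ["yes", "yes", "no", "no"]
groups = [["yes", "yes"], ["no", "no"]]  # a split that separates the classes perfectly
print(information_gain(labels, groups))  # 1.0
```

A perfect split removes all uncertainty, so the gain equals the full entropy of D (here 1 bit).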

Random Forests

An ensemble of decision trees that differ only in their training sets: each tree is trained on N samples drawn with replacement (a bootstrap sample) from the original training set of size N. The final classification is decided by a vote over all trees.
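A minimal sketch of this procedure, using one-threshold stumps as stand-ins for full decision trees (all names and the toy data are illustrative):

```python
import random
from collections import Counter

def bootstrap_sample(X, y):
    """Each tree trains on N draws with replacement from the original N samples."""
    idx = [random.randrange(len(X)) for _ in range(len(X))]
    return [X[i] for i in idx], [y[i] for i in idx]

def train_stump(X, y):
    """Stand-in for a full decision tree: the best single threshold on feature 0."""
    vals = sorted(set(x[0] for x in X))
    best = None
    for lo, hi in zip(vals, vals[1:]):
        t = (lo + hi) / 2  # candidate split: midpoint between adjacent values
        left = [yi for xi, yi in zip(X, y) if xi[0] <= t]
        right = [yi for xi, yi in zip(X, y) if xi[0] > t]
        l_lab = Counter(left).most_common(1)[0][0]
        r_lab = Counter(right).most_common(1)[0][0]
        acc = (left.count(l_lab) + right.count(r_lab)) / len(y)
        if best is None or acc > best[0]:
            best = (acc, t, l_lab, r_lab)
    if best is None:  # all feature values identical: fall back to the majority label
        lab = Counter(y).most_common(1)[0][0]
        return lambda x: lab
    _, t, l_lab, r_lab = best
    return lambda x: l_lab if x[0] <= t else r_lab

def forest_predict(trees, x):
    """Final classification: majority vote over all trees."""
    return Counter(tree(x) for tree in trees).most_common(1)[0][0]

random.seed(0)
X = [[0.1], [0.2], [0.3], [0.8], [0.9], [1.0]]
y = ["a", "a", "a", "b", "b", "b"]
trees = [train_stump(*bootstrap_sample(X, y)) for _ in range(25)]
print(forest_predict(trees, [0.15]), forest_predict(trees, [0.95]))
```

The bootstrap resampling is what makes the trees different from one another; the vote then averages away the individual trees' errors.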

Gradient Boosting Tree

Again an ensemble of decision trees, but here each new tree is trained on the results of the previously generated trees, learning to correct the errors the ensemble has made so far.
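A sketch under squared loss, where each new stump is fit to the residuals of the ensemble built so far (for squared loss the negative gradient is exactly the residual y - F(x); real implementations fit full regression trees, and all names here are illustrative):

```python
def fit_stump(X, r):
    """Regression stump: best single threshold on feature 0, mean of r on each side."""
    vals = sorted(set(x[0] for x in X))
    best = None
    for lo, hi in zip(vals, vals[1:]):
        t = (lo + hi) / 2
        left = [ri for xi, ri in zip(X, r) if xi[0] <= t]
        right = [ri for xi, ri in zip(X, r) if xi[0] > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - lm) ** 2 for ri in left) + sum((ri - rm) ** 2 for ri in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x[0] <= t else rm

def gradient_boost(X, y, n_trees=50, lr=0.3):
    """Each new stump is fit to the current residuals (negative gradient of squared loss)."""
    pred = [0.0] * len(y)
    trees = []
    for _ in range(n_trees):
        residual = [yi - pi for yi, pi in zip(y, pred)]
        tree = fit_stump(X, residual)
        trees.append(tree)
        pred = [pi + lr * tree(xi) for pi, xi in zip(pred, X)]
    return lambda x: sum(lr * s(x) for s in trees)

X = [[1.0], [2.0], [3.0], [4.0]]
y = [1.0, 1.0, 3.0, 3.0]
model = gradient_boost(X, y)
print(round(model([1.0]), 3), round(model([4.0]), 3))  # 1.0 3.0
```

Each round shrinks the residual by the learning-rate factor, so the ensemble's training predictions converge geometrically toward y.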

Ant Colony Algorithm

A stochastic search algorithm.
η (eta) carries local information, the heuristic factor (visibility); it is fixed at initialization.
τ (tau) carries global information, the amount of pheromone; it is updated at every iteration.

Taking TSP as an example, the probability that an ant at city r moves to city s is:

p_{rs} = \frac{\tau_{rs}^\alpha \, \eta_{rs}^\beta}{\sum_{u \in allowed} \tau_{ru}^\alpha \, \eta_{ru}^\beta}

When all ants have completed their paths, τ is updated: the pheromone increment is computed from the optimal path found by the ants in this iteration.
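A sketch of these two rules, with assumed standard parameter names (α and β for the exponents, ρ for the evaporation rate, Q for the deposit constant; the toy distance matrix is illustrative):

```python
def transition_probs(r, allowed, tau, eta, alpha=1.0, beta=2.0):
    """p(r -> s) proportional to tau[r][s]^alpha * eta[r][s]^beta over unvisited cities."""
    w = {s: (tau[r][s] ** alpha) * (eta[r][s] ** beta) for s in allowed}
    total = sum(w.values())
    return {s: v / total for s, v in w.items()}

def update_pheromone(tau, best_tour, best_len, rho=0.5, Q=1.0):
    """Evaporate all pheromone, then deposit Q/L on each edge of the iteration-best tour."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= 1 - rho
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += Q / best_len
        tau[b][a] += Q / best_len

dist = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
tau = [[1.0] * 3 for _ in range(3)]                                  # uniform initial pheromone
eta = [[0.0 if i == j else 1 / dist[i][j] for j in range(3)] for i in range(3)]  # visibility = 1/distance
p = transition_probs(0, [1, 2], tau, eta)
print(p[1] > p[2])  # True: the nearer city gets the higher probability
update_pheromone(tau, [0, 1, 2], 1 + 2 + 4)
print(round(tau[0][1], 4))  # 0.6429
```

Early on, η dominates and ants mostly follow short edges; as τ accumulates on good tours, the colony's shared experience increasingly steers the search.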

Origin blog.csdn.net/Site1997/article/details/85011199