3.2 Decision tree

Li Mu

Bilibili: https://space.bilibili.com/1567748478/channel/collectiondetail?sid=28144
Course homepage: https://c.d2l.ai/stanford-cs329p/

1. Decision tree

A decision tree is a tree-structured model used to make decisions. Decision trees can be divided into classification trees and regression trees.

On the left is a classification example: the output is a category or label.

On the right is a regression example: the output is not a category but a continuous value; here, the goal is to predict the house price.
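The contrast above can be sketched in code. This is a minimal example assuming scikit-learn is available; the data, feature values, and prices are hypothetical toy numbers chosen only to show that a classification tree outputs labels while a regression tree outputs continuous values.

```python
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Toy classification data: two features -> class label 0 or 1.
X_cls = [[0, 0], [1, 1], [0, 1], [1, 0]]
y_cls = [0, 1, 0, 1]  # here the label simply follows the first feature
clf = DecisionTreeClassifier(max_depth=2).fit(X_cls, y_cls)

# Toy regression data: house size (m^2) -> price (hypothetical numbers).
X_reg = [[50], [80], [120], [200]]
y_reg = [100.0, 160.0, 250.0, 400.0]
reg = DecisionTreeRegressor(max_depth=2).fit(X_reg, y_reg)

print(clf.predict([[1, 1]]))  # a discrete category label
print(reg.predict([[100]]))   # a continuous price estimate
```

The same tree-growing procedure underlies both models; only the leaf output differs (a majority class versus an averaged value).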

Decision tree (single tree)

  • Advantages:

    • Interpretability. Decision trees are among the few machine learning models whose predictions can be explained: you can trace the exact decision path and see which branch each input follows.

    • They handle both numeric and categorical features. For a numeric feature, a decision node tests whether the input value is above or below a threshold; for a categorical feature, it tests whether the input value belongs to a given set of categories.

  • Disadvantages:

    • Weak robustness. A decision tree is grown by repeatedly choosing split points from the data, so if the data contains noise or its distribution shifts, the tree's structure and the data reaching each node change as well. Remedy: ensemble learning, which grows a large number of trees so that together they cover many different data situations.

    • Complex decision trees can overfit. Reason: too many branches let the tree memorize the training data, including its noise.
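The two kinds of decision nodes described under the advantages can be sketched as plain functions. This is a hypothetical illustration, not a real library API: the threshold 100.0 and the category set are made-up examples.

```python
def numeric_node(value, threshold=100.0):
    """Numeric feature: branch on whether the value exceeds a threshold."""
    return "right" if value > threshold else "left"

def categorical_node(value, categories=frozenset({"red", "blue"})):
    """Categorical feature: branch on membership in a set of categories."""
    return "right" if value in categories else "left"

print(numeric_node(150.0))        # exceeds the threshold
print(categorical_node("green"))  # not in the category set
```

A real tree learner chooses the threshold (or category subset) at each node to maximize some split criterion, such as information gain or MSE reduction, but the branching tests themselves are exactly this simple.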
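The ensemble remedy mentioned under the robustness point can also be sketched. This is a hedged example assuming scikit-learn and NumPy; the data is synthetic (the true rule is "first feature > 0.5", with 10% of training labels flipped to simulate noise), and the accuracy threshold is just a sanity check, not a benchmark.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)

# Synthetic training data with 10% label noise.
X_train = rng.rand(200, 2)
y_clean = (X_train[:, 0] > 0.5).astype(int)
flip = rng.rand(200) < 0.1
y_noisy = np.where(flip, 1 - y_clean, y_clean)

# Clean test data drawn from the same distribution.
X_test = rng.rand(200, 2)
y_test = (X_test[:, 0] > 0.5).astype(int)

# A forest averages many trees fit on bootstrap samples, so noise that
# would reshape any single tree has less effect on the combined prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_noisy)
print(round(forest.score(X_test, y_test), 2))
```

Each tree in the forest sees a different bootstrap sample and a random subset of features, so the trees disagree on the noisy points and their vote washes much of the noise out.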


Origin blog.csdn.net/ch_ccc/article/details/129921163