"Statistical Learning Methods" Study Notes (Chapter 5)

Chapter 5: Decision Trees

ID3 algorithm (based on information gain)

entropy: $H(X) = -\sum_{i=1}^{n} p_i \log p_i$

# Entropy of a dataset (the class label is the last column of each sample)
from math import log

def calc_ent(datasets):
    data_length = len(datasets)
    # count the occurrences of each class label (e.g. yes/no)
    label_count = {}
    for i in range(data_length):
        label = datasets[i][-1]
        if label not in label_count:
            label_count[label] = 0
        label_count[label] += 1
    # H(D) = -sum_k p_k * log2(p_k)
    ent = -sum([(p / data_length) * log(p / data_length, 2) for p in label_count.values()])
    return ent

conditional entropy: $H(Y|X) = \sum_{i=1}^{n} p_i H(Y|X=x_i)$, where $p_i = P(X=x_i)$

# Empirical conditional entropy H(D|A), where A is the feature at column `axis`
def cond_ent(datasets, axis=0):
    data_length = len(datasets)
    # partition the samples by their value of feature A
    feature_sets = {}
    for i in range(data_length):
        feature = datasets[i][axis]
        if feature not in feature_sets:
            feature_sets[feature] = []
        feature_sets[feature].append(datasets[i])
    # weighted average of the entropy of each partition
    cond_ent = sum([(len(p) / data_length) * calc_ent(p) for p in feature_sets.values()])
    return cond_ent

information gain: $g(D, A) = H(D) - H(D|A)$

# Information gain
def info_gain(ent, cond_ent):
    return ent - cond_ent

For the training data set, compute the information gain of each feature, compare the values, and select the feature with the largest information gain. A feature with larger information gain has stronger classification power.

# Feature selection: compare the information gain of every feature
# (`labels`, the list of feature names, is assumed defined alongside the dataset)
def info_gain_train(datasets):
    count = len(datasets[0]) - 1
    ent = calc_ent(datasets)
    best_feature = []
    for c in range(count):
        c_info_gain = info_gain(ent, cond_ent(datasets, axis=c))
        best_feature.append((c, c_info_gain))
        print('feature({}) - info_gain - {:.3f}'.format(labels[c], c_info_gain))
    # pick the feature with the largest gain
    best_ = max(best_feature, key=lambda x: x[-1])
    return 'feature({}) has the largest information gain; choose it as the root node feature'.format(labels[best_[0]])

C4.5 (based on information gain ratio)

information gain ratio: $g_R(D, A) = \frac{g(D,A)}{H_A(D)}$, where $H_A(D)$ is the entropy of the dataset $D$ with respect to the values of feature $A$
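
The notes give no code for the gain ratio; below is a minimal sketch that reuses the calc_ent, cond_ent, and info_gain helpers above (the helper feature_ent, which computes H_A(D), is my own addition):

# Entropy of the value distribution of the feature at column `axis`, i.e. H_A(D)
def feature_ent(datasets, axis=0):
    data_length = len(datasets)
    value_count = {}
    for row in datasets:
        value_count[row[axis]] = value_count.get(row[axis], 0) + 1
    return -sum((c / data_length) * log(c / data_length, 2) for c in value_count.values())

# Information gain ratio g_R(D, A) = g(D, A) / H_A(D)
def info_gain_ratio(datasets, axis=0):
    g = info_gain(calc_ent(datasets), cond_ent(datasets, axis=axis))
    h_a = feature_ent(datasets, axis=axis)
    return g / h_a if h_a > 0 else 0.0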

CART (based on the Gini index)

Gini index: $Gini(D) = \sum_{k=1}^{K} p_k (1 - p_k) = 1 - \sum_{k=1}^{K} p_k^2$

For a binary classification problem:
$Gini(p) = 2p(1-p)$

Given feature $A$, the Gini index of set $D$ is:
$Gini(D, A) = \frac{|D_1|}{|D|} Gini(D_1) + \frac{|D_2|}{|D|} Gini(D_2)$
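
No Gini code appears in the notes; here is a minimal sketch in the same style as the entropy helpers above (the function names gini and gini_index are my own):

# Gini(D) = 1 - sum_k p_k^2, computed over the class labels in the last column
def gini(datasets):
    data_length = len(datasets)
    label_count = {}
    for row in datasets:
        label_count[row[-1]] = label_count.get(row[-1], 0) + 1
    return 1 - sum((c / data_length) ** 2 for c in label_count.values())

# Gini(D, A): split D on feature `axis` into D1 = {A == value} and D2 = {A != value}
def gini_index(datasets, axis, value):
    n = len(datasets)
    d1 = [row for row in datasets if row[axis] == value]
    d2 = [row for row in datasets if row[axis] != value]
    # an empty partition contributes nothing
    return sum(len(d) / n * gini(d) for d in (d1, d2) if d)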

  • Static methods (staticmethod)

Python's staticmethod builds a static method from a function.
Normally, every function defined in a class is a bound method of the instance: when an instance calls a bound method, it automatically passes itself as the method's first argument. Besides bound methods there are two other common kinds of method: static methods and class methods.
A @staticmethod lives in the class's namespace but does not operate on any instance, so it receives no implicit first argument. A static method is declared like this:

class C(object):
    @staticmethod
    def f(arg1, arg2):
        ...  # method body

The method can be called on the class without instantiating it: C.f()
It can also be called after instantiation: C().f()
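
A tiny runnable illustration of both call styles (the MathUtil class and add method are hypothetical names):

class MathUtil(object):
    @staticmethod
    def add(a, b):
        # no implicit self: only the explicit arguments are passed
        return a + b

print(MathUtil.add(1, 2))    # called on the class: prints 3
print(MathUtil().add(1, 2))  # called on an instance: prints 3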

  • Graphviz

Graphviz is open-source graph-visualization software: a way of representing structural information as abstract graphs and networks.

After installing the python graphviz package, you may see the error: failed to execute ['dot', '-Tsvg'], make sure the Graphviz executables are on your systems' PATH

  1. Download graphviz-2.38.zip from https://graphviz.gitlab.io/_pages/Download/Download_windows.html
  2. Set the environment variables:
    first, add to the user PATH: D:\Anaconda\envs\python36\graphviz-2.38\release\bin
    second, add as a system variable: D:\Anaconda\envs\python36\graphviz-2.38\release\bin\dot.exe
    Check whether the variables took effect with the command: dot -version
    If they did not, verify that the paths are correct, then restart
  3. Make Graphviz available to Python:
    run pip install graphviz in cmd; a small verification sketch follows below
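
To confirm the whole chain works, a quick check (the graph contents and the output file name graphviz_check are arbitrary):

# End-to-end check that Python can find the dot executable
from graphviz import Digraph

dot = Digraph(comment='PATH check', format='svg')
dot.node('A', 'root')
dot.node('B', 'leaf')
dot.edge('A', 'B')
dot.render('graphviz_check')  # writes graphviz_check.svg; fails if dot is not on PATH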

Generating a decision tree with ID3
Source code: https://github.com/fengdu78/lihang-code/blob/master/code/第5章 决策树(DecisonTree)/DT.ipynb

# Node class for the decision tree (each internal node has one child per feature value)
class Node:
    def __init__(self, root=True, label=None, feature_name=None, feature=None):
        self.root = root                  # True marks a leaf node
        self.label = label                # class label (meaningful for leaves)
        self.feature_name = feature_name  # name of the splitting feature
        self.feature = feature            # column index of the splitting feature
        self.tree = {}                    # children, keyed by feature value
        self.result = {'label': self.label, 'feature': self.feature, 'tree': self.tree}

    def __repr__(self):
        return '{}'.format(self.result)

    def add_node(self, val, node):
        self.tree[val] = node

    def predict(self, features):
        # leaf: return its class label
        if self.root is True:
            return self.label
        # internal node: descend into the child matching this sample's value of the splitting feature
        return self.tree[features[self.feature]].predict(features)

Algorithm implementation. The methods below are wrapped in a DTree class, reusing the entropy helpers defined above.

import numpy as np


class DTree:
    def __init__(self, epsilon=0.1):
        # threshold eta: stop splitting once the best information gain falls below it
        self.epsilon = epsilon
        self._tree = None

    @staticmethod
    def info_gain_train(datasets):
        # return (index of the best feature, its information gain),
        # using the calc_ent / cond_ent / info_gain helpers defined above
        count = len(datasets[0]) - 1
        ent = calc_ent(datasets)
        gains = [(c, info_gain(ent, cond_ent(datasets, axis=c))) for c in range(count)]
        return max(gains, key=lambda x: x[-1])

    def train(self, train_data):
        """
        input: dataset D (DataFrame), feature set A, threshold eta
        output: decision tree T
        """
        _, y_train, features = train_data.iloc[:, :-1], train_data.iloc[:, -1], train_data.columns[:-1]
        # 1. If every instance in D belongs to the same class Ck, T is a
        #    single-node tree labeled Ck; return T
        if len(y_train.value_counts()) == 1:
            return Node(root=True,
                        label=y_train.iloc[0])

        # 2. If A is empty, T is a single-node tree; label it with the
        #    majority class Ck in D and return T
        if len(features) == 0:
            return Node(root=True, label=y_train.value_counts().sort_values(ascending=False).index[0])

        # 3. Compute the information gain of each feature (as in section 5.1);
        #    Ag is the feature with the largest gain
        max_feature, max_info_gain = self.info_gain_train(np.array(train_data))
        max_feature_name = features[max_feature]

        # 4. If Ag's information gain is below the threshold eta, make T a
        #    single-node tree labeled with the majority class Ck in D and return T
        if max_info_gain < self.epsilon:
            return Node(root=True, label=y_train.value_counts().sort_values(ascending=False).index[0])

        # 5. Build the subsets of D induced by the values of Ag
        node_tree = Node(root=False, feature_name=max_feature_name, feature=max_feature)

        feature_list = train_data[max_feature_name].value_counts().index
        for f in feature_list:
            sub_train_df = train_data.loc[train_data[max_feature_name] == f].drop([max_feature_name], axis=1)

            # 6. Recursively build the subtree for each subset
            sub_tree = self.train(sub_train_df)
            node_tree.add_node(f, sub_tree)

        # pprint.pprint(node_tree.tree)
        return node_tree

    def fit(self, train_data):
        self._tree = self.train(train_data)
        return self._tree

    def predict(self, X_test):
        return self._tree.predict(X_test)
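
A hedged usage sketch on a toy dataset (the column names and values are made up; the last column is the class label):

import pandas as pd

datasets = [
    ['youth', 'no', 'no', 'fair', 'no'],
    ['youth', 'no', 'yes', 'good', 'yes'],
    ['middle', 'yes', 'yes', 'good', 'yes'],
    ['senior', 'no', 'no', 'fair', 'no'],
]
labels = ['age', 'has_job', 'owns_house', 'credit', 'class']

dt = DTree()
tree = dt.fit(pd.DataFrame(datasets, columns=labels))
print(tree)
print(dt.predict(['senior', 'no', 'no', 'fair']))  # -> 'no'

Here owns_house separates the two classes perfectly, so the learned tree is a single split with two leaf nodes.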

Reposted from blog.csdn.net/Hhhhuply/article/details/88553657