Reading Notes on 《统计学习方法》 (Statistical Learning Methods), Chapter 4: Naive Bayes


Everything in preparation for data mining

4. The Naive Bayes Method (naive Bayes)

Naive Bayes is a classification method based on Bayes' theorem and the assumption of conditional independence among features.

  • Generative model (naive Bayes is one)
  • Discriminative model
4.1 Learning and Classification of Naive Bayes
4.1.1 Basic Method
  • Setup: input space $X \subseteq R^n$, output space $Y=\{c_1,c_2,\cdots,c_K\}$. $P(X,Y)$ is the joint distribution of $X$ and $Y$. We place a conditional independence assumption on the conditional probability distribution: $P(X|Y)=P(X^{(1)}|Y)P(X^{(2)}|Y)\cdots P(X^{(n)}|Y)$, i.e., given the class, the features used for classification are conditionally independent.

  • Learn the joint probability distribution $P(X,Y)$ from the training set $T$:

    • Learn the prior probability distribution: $P(Y=c_k),\ k=1,2,\cdots,K$
    • The conditional probability distribution: $P(X=x|Y=c_k)=P(X^{(1)}=x^{(1)},X^{(2)}=x^{(2)},\cdots,X^{(n)}=x^{(n)}|Y=c_k)=\prod_{j=1}^n P(X^{(j)}=x^{(j)}|Y=c_k)$
    • The joint probability distribution: $P(X,Y)=P(X|Y)P(Y)$
  • Compute the posterior probability distribution with the learned model:
    $P(Y=c_k|X=x)=\frac{P(X=x|Y=c_k)P(Y=c_k)}{P(X=x)}$

  • The naive Bayes classifier model:
    $y=f(x)=\arg\max_{c_k} P(Y=c_k)\prod_{j=1}^n P(X^{(j)}=x^{(j)}|Y=c_k)$
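
As a quick illustration, here is a minimal sketch of this decision rule in Python. The probability tables are hand-computed counts from the training data used in section 4.3, and only the entries needed for x = [2, 's'] are listed.

# Decision rule: y = argmax_{c_k} P(Y=c_k) * prod_j P(X^(j)=x^(j)|Y=c_k)
prior = {1: 9/15, -1: 6/15}                 # P(Y=c_k)
cond = {                                    # cond[j][(a, c)] = P(X^(j)=a | Y=c)
    0: {(2, 1): 3/9, (2, -1): 2/6},
    1: {('s', 1): 1/9, ('s', -1): 3/6},
}

def classify(x):
    scores = dict(prior)                    # start from the priors
    for c in scores:
        for j, a in enumerate(x):
            scores[c] *= cond[j][(a, c)]    # multiply in each feature's likelihood
    return max(scores, key=scores.get)

print(classify([2, 's']))  # -> -1 (score 1/15 vs 1/45 for class 1)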

4.1.2 Proof of the Classifier Model
  • Choose a loss function: for classification models, the 0-1 loss function is the usual choice:
    $L(Y,f(X))=\begin{cases}1, & Y\neq f(X)\\ 0, & Y=f(X)\end{cases}$

  • Expected risk:
    For a discrete variable, the expectation is $\sum_x xP(x)$, so
    $R_{exp}(f)=E[L(Y,f(X))]=\sum_X\sum_Y L(Y,f(X))P(X,Y)=\sum_X\sum_Y L(Y,f(X))P(Y|X)P(X)=\sum_X\Big(\sum_Y L(Y,f(X))P(Y|X)\Big)P(X)$

  • To minimize the expected risk, it suffices to minimize $\sum_Y L(Y,f(X))P(Y|X)$ pointwise for each $X$:
    $\min\sum_Y L(Y,f(X))P(Y|X)=\min\sum_k L(Y=c_k,f(X))P(Y=c_k|X)$
    When $Y\neq f(X)$, i.e. $f(X)\neq c_k$, $L(Y=c_k,f(X))$ equals 1, so the formula above is equivalent to
    $\min\sum_k I(f(X)\neq c_k)P(Y=c_k|X)=\min\sum_k(1-I(f(X)=c_k))P(Y=c_k|X)=\min\Big\{\sum_k P(Y=c_k|X)-\sum_k I(f(X)=c_k)P(Y=c_k|X)\Big\}=\max\sum_k I(f(X)=c_k)P(Y=c_k|X)$
    This is maximized by picking the class with the largest posterior, $f(x)=\arg\max_{c_k}P(Y=c_k|X=x)$; expanding the posterior with Bayes' theorem and dropping the denominator $P(X=x)$, which is the same for every $c_k$, yields the classifier model in 4.1.1.
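
A tiny numeric check of this conclusion, using a hypothetical posterior: the expected 0-1 loss of predicting class $c$ is $1-P(Y=c|X=x)$, which is minimized by the posterior argmax.

# Hypothetical posterior P(Y=c_k | X=x) for a two-class problem
posterior = {1: 0.7, -1: 0.3}
risk = {c: 1 - p for c, p in posterior.items()}   # expected 0-1 loss per class
print(min(risk, key=risk.get))                    # -> 1, the posterior argmax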

4.2 Parameter Estimation
4.2.1 Maximum Likelihood Estimation
  • $P(Y=c_k)=\frac{\sum_{i=1}^N I(y_i=c_k)}{N}$
    Output space $Y=\{c_1,c_2,\cdots,c_K\}$; take $\theta$ as the parameter, with each value having probability $\theta_1,\theta_2,\cdots,\theta_K$, i.e. $P(Y=c_i|\theta)=\theta_i$, subject to $\sum_{i=1}^K\theta_i=1$. Let $m_i$ be the number of samples with $y=c_i$.
    Proof:
    $\max P(y_1y_2\cdots y_N|\theta)=P(y_1|\theta)P(y_2|\theta)\cdots P(y_N|\theta)=\theta_1^{m_1}\theta_2^{m_2}\cdots\theta_K^{m_K}$
    $\max \ln(P(y_1y_2\cdots y_N|\theta))=m_1\ln\theta_1+m_2\ln\theta_2+\cdots+m_K\ln\theta_K$
    $L=\ln(P(y_1y_2\cdots y_N|\theta))+\lambda(\theta_1+\theta_2+\cdots+\theta_K-1)=m_1\ln\theta_1+m_2\ln\theta_2+\cdots+m_K\ln\theta_K+\lambda(\theta_1+\theta_2+\cdots+\theta_K-1)$
    $\frac{\partial L}{\partial\theta_i}=\frac{m_i}{\theta_i}+\lambda=0,\ \theta_i=-\frac{m_i}{\lambda}$
    $\sum_{i=1}^K-\frac{m_i}{\lambda}=1,\ \lambda=-\sum_{i=1}^K m_i=-N,\ \theta_i=\frac{m_i}{N}$

  • $P(X^{(j)}=a|Y=c_k)=\frac{\sum_{i=1}^N I(x_i^{(j)}=a,\ y_i=c_k)}{\sum_{i=1}^N I(y_i=c_k)}$
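
Both estimates are plain counting. A minimal counting sketch in Python, using the training data from section 4.3:

from collections import Counter

X = [[1,'s'],[1,'m'],[1,'m'],[1,'s'],[1,'s'],[2,'s'],[2,'m'],[2,'m'],
     [2,'l'],[2,'l'],[3,'l'],[3,'m'],[3,'m'],[3,'l'],[3,'l']]
Y = [-1,-1,1,1,-1,-1,-1,1,1,1,1,1,1,1,-1]

N = len(Y)
class_count = Counter(Y)                              # sum_i I(y_i = c_k)
prior = {c: n / N for c, n in class_count.items()}    # P(Y = c_k) = m_k / N

# P(X^(j)=a | Y=c_k): joint count divided by the class count
joint = Counter((j, a, c) for x, c in zip(X, Y) for j, a in enumerate(x))
cond = {k: v / class_count[k[2]] for k, v in joint.items()}

print(prior[1])           # 9/15 = 0.6
print(cond[(1, 's', 1)])  # P(X^(2)='s' | Y=1) = 1/9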

4.2.2 Bayesian Estimation

When the sample is small, the count under some condition may be zero, so the estimated probability for that condition becomes zero, which then distorts the posterior probabilities computed later. Bayesian estimation is therefore used instead:
$P(Y=c_k)=\frac{\sum_{i=1}^N I(y_i=c_k)+\lambda}{N+K\lambda}$
$P(X^{(j)}=a|Y=c_k)=\frac{\sum_{i=1}^N I(x_i^{(j)}=a,\ y_i=c_k)+\lambda}{\sum_{i=1}^N I(y_i=c_k)+S\lambda}$
where $K$ is the number of classes and $S$ is the number of possible values of the $j$-th input feature. Taking $\lambda=1$ gives Laplace smoothing.
Proof:
For the multinomial distribution over $Y=\{c_1,c_2,\cdots,c_K\}$, take the (Dirichlet) prior distribution
$\pi(\theta)=\frac{\Gamma(\alpha_1+\alpha_2+\cdots+\alpha_K)}{\Gamma(\alpha_1)\Gamma(\alpha_2)\cdots\Gamma(\alpha_K)}\theta_1^{\alpha_1-1}\theta_2^{\alpha_2-1}\cdots\theta_K^{\alpha_K-1}$
With no reason to prefer one class, take $\alpha_1=\alpha_2=\cdots=\alpha_K=\alpha$. Then
$P(\theta|y_1\cdots y_N)=\frac{P(\theta,y_1\cdots y_N)}{P(y_1\cdots y_N)}\varpropto\pi(\theta)P(y_1\cdots y_N|\theta)\varpropto\theta_1^{\alpha-1}\theta_2^{\alpha-1}\cdots\theta_K^{\alpha-1}\cdot\theta_1^{m_1}\theta_2^{m_2}\cdots\theta_K^{m_K}$
$L=\theta_1^{m_1+\alpha-1}\theta_2^{m_2+\alpha-1}\cdots\theta_K^{m_K+\alpha-1}$
Maximizing the posterior (the MAP estimate) with a Lagrange multiplier:
$L_1=\ln L+\lambda(\theta_1+\cdots+\theta_K-1)=\sum_{i=1}^K(m_i+\alpha-1)\ln\theta_i+\lambda\Big(\sum_{i=1}^K\theta_i-1\Big)$
$\frac{\partial L_1}{\partial\theta_i}=\frac{m_i+\alpha-1}{\theta_i}+\lambda=0,\ \theta_i=-\frac{m_i+\alpha-1}{\lambda}$
$\sum_{i=1}^K-\frac{m_i+\alpha-1}{\lambda}=1,\ \lambda=-\sum_{i=1}^K(m_i+\alpha-1)=-N-K(\alpha-1)$
$\theta_i=\frac{m_i+\alpha-1}{N+K(\alpha-1)}$
Setting $\lambda=\alpha-1$ (the smoothing constant, not the multiplier above) recovers the smoothed estimate of $P(Y=c_k)$.
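
A minimal sketch of the smoothed estimates on the section 4.3 data, with $\lambda=1$ (Laplace smoothing):

from collections import Counter

X = [[1,'s'],[1,'m'],[1,'m'],[1,'s'],[1,'s'],[2,'s'],[2,'m'],[2,'m'],
     [2,'l'],[2,'l'],[3,'l'],[3,'m'],[3,'m'],[3,'l'],[3,'l']]
Y = [-1,-1,1,1,-1,-1,-1,1,1,1,1,1,1,1,-1]

lam = 1                                    # lambda = 1: Laplace smoothing
N, K = len(Y), len(set(Y))                 # sample size, number of classes
class_count = Counter(Y)

# P(Y=c_k) = (count(c_k) + lambda) / (N + K*lambda)
prior = {c: (n + lam) / (N + K * lam) for c, n in class_count.items()}

def cond_prob(j, a, c):
    """Smoothed P(X^(j)=a | Y=c) = (joint count + lambda) / (class count + S*lambda)."""
    S = len({x[j] for x in X})             # S: number of values of feature j
    num = sum(1 for x, y in zip(X, Y) if x[j] == a and y == c) + lam
    return num / (class_count[c] + S * lam)

print(prior[1])              # (9+1)/(15+2*1) = 10/17
print(cond_prob(1, 's', 1))  # (1+1)/(9+3*1) = 2/12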

4.3 My Implementation
import pandas as pd
from pandas import DataFrame

class naive_bayes:
    def __init__(self,X,Y):
        """
        X is a 2-D array: each row is a sample and each column is one
        dimension of the input space; Y holds the corresponding labels.
        """
        self.X = X
        self.Y = Y
        self.model = self.training()

    def training(self):
        data = DataFrame(self.X)
        data['yvalue'] = self.Y
        # Count the samples in each y class
        d_y = data['yvalue'].value_counts()
        # p(y) = class count / total count
        m_y = d_y/(len(data))
        # p(x|y): in the final m_xy, the first index level is the feature
        # index, the second level is the feature value; columns are the
        # y classes
        frames = []
        for i in range(len(data.columns)-1):
            # Count by y class and the i-th feature
            d_xy = pd.crosstab(data[i],data['yvalue'])
            # p(x|y): divide each column by its class count
            d_xy = d_xy/d_y
            d_xy['feature'] = i
            frames.append(d_xy)
        m_xy = pd.concat(frames)
        m_xy.set_index(['feature',m_xy.index],inplace=True)
        return {'m_y':m_y,'m_xy':m_xy}

    def predict(self,x):
        # Final predicted y class
        py = ''
        # Highest probability found so far
        maxp = 0
        # Loop over the y classes and compute each one's probability
        for y in self.model['m_y'].index:
            p = self.model['m_y'].loc[y]
            for i_feature in range(len(x)):
                p *= self.model['m_xy'].loc[(i_feature,x[i_feature]),y]
            if maxp < p:
                py = y
                maxp = p
        print('Predicted class:',py,', with probability:',maxp)
        return py
        
x=[[1,'s'],[1,'m'],[1,'m'],[1,'s'],[1,'s'],[2,'s'],[2,'m'],[2,'m'],[2,'l'],[2,'l'],[3,'l'],[3,'m'],[3,'m'],[3,'l'],[3,'l']]
y=[-1,-1,1,1,-1,-1,-1,1,1,1,1,1,1,1,-1]             
n = naive_bayes(x,y)
n.predict([2,'s'])
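
Running this on the data above reproduces Example 4.1 from the book: the predicted class for [2,'s'] is -1, with score 0.4*(1/3)*(1/2) = 1/15 ≈ 0.0667 versus 1/45 for class 1. Note that this implementation uses the plain maximum likelihood estimates from 4.2.1: a zero count simply gives probability 0, and a feature value never seen in training raises a KeyError in predict; this is exactly the zero-count problem the Bayesian estimation of 4.2.2 is meant to fix.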

Reposted from blog.csdn.net/liuerin/article/details/89313282