Machine Learning in Action, CH04: Naive Bayes

Summary:

The core idea of Bayesian decision theory is to choose the class with the highest probability, that is, to make the decision whose posterior probability is largest. The evidence term P(w) can be ignored, since it is the same for every class.

Applying Bayes' theorem:

$$P(c_i \mid w) = \frac{P(w \mid c_i)\,P(c_i)}{P(w)}$$
The classifier drops P(w) and compares the remaining factors P(w|ci)P(ci) across the classes to make its decision.
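
A minimal sketch of this rule (the priors and likelihoods below are made-up numbers, purely for illustration): with the common denominator dropped, whichever class has the larger numerator wins.

priors = {0: 0.5, 1: 0.5}            # P(c), hypothetical values
likelihoods = {0: 0.002, 1: 0.015}   # P(w|c), hypothetical values

scores = {c: likelihoods[c] * priors[c] for c in priors}   # numerators only
print(max(scores, key=scores.get))   # 1 -- the class with the larger numerator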

Algorithm implementation

Below we build a simple message-board classifier that automatically labels posts as abusive or non-abusive, encoded as 1 and 0 respectively. All of the functions that follow are written in a file named bayes.py; the tests later in this post import bayes.py and call them.

1. Load the dataset

This function returns the tokenized posts and their class labels.

def loadDataSet():
    postingList=[['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                 ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                 ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                 ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                 ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                 ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0,1,0,1,0,1]    #1 is abusive, 0 not
    return postingList,classVec      

2. Build a vocabulary from the samples

The following function builds the vocabulary, the list of all unique words, from the sample data above.

def createVocabList(dataSet):
    vocabSet = set([])  #create empty set
    for document in dataSet:
        vocabSet = vocabSet | set(document) #union of the two sets
    return list(vocabSet)

3. Map each sample onto the vocabulary

The function below maps a single sample onto the vocabulary, recording for each vocabulary word whether it appears in the sample: 1 means it appeared, 0 means it did not.

def setOfWords2Vec(vocabList, inputSet):
    returnVec = [0]*len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
        else: print ("the word: %s is not in my Vocabulary!" % word)
    return returnVec

4. Compute the conditional probabilities and the class prior

from numpy import ones, log   # bayes.py needs NumPy for the vectorized math below

def trainNB0(trainMatrix,trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory)/float(numTrainDocs) # prior probability of the abusive class, P(c=1)
    p0Num = ones(numWords); p1Num = ones(numWords)    # start every word count at 1 so no conditional probability becomes 0
    p0Denom = 2.0; p1Denom = 2.0                      # same purpose: keep the denominators consistent with the +1 counts
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = log(p1Num/p1Denom)         # log conditional probabilities log P(wi|c=1)
    p0Vect = log(p0Num/p0Denom)         # log conditional probabilities log P(wi|c=0)
    return p0Vect,p1Vect,pAbusive       # return both conditional probability vectors and P(c=1)
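
Starting the counts at 1 and the denominators at 2.0 is the book's simplified form of add-one (Laplace) smoothing; strict Laplace smoothing would add the vocabulary size, not 2, to the denominator, but the purpose is the same. Without it, one word that never occurs in a class makes P(wi|c) = 0 and zeroes out the entire product. A minimal sketch with made-up counts:

counts = [3, 0, 2]                                   # hypothetical word counts in one class
total = sum(counts)

raw      = [c / total for c in counts]               # [0.6, 0.0, 0.4] -- contains a zero
smoothed = [(c + 1) / (total + 2) for c in counts]   # mirrors the book's +1 counts and +2.0 denominator

prod = 1.0
for p in raw:
    prod *= p
print(prod)       # 0.0 -- a single zero kills the whole product
print(smoothed)   # [0.571..., 0.142..., 0.428...] -- all strictly positive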

5. The naive Bayes classification function

This function takes four inputs: vec2Classify is the vocabulary vector of the sample to classify, p0Vec holds the log conditional probabilities log P(wi|c=0), p1Vec holds log P(wi|c=1), and pClass1 is the class prior P(c=1).

def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    p1 = sum(vec2Classify * p1Vec) + log(pClass1)    #element-wise mult
    p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1)
    if p1 > p0:
        return 1
    else: 
        return 0

Here p1 and p0 stand for $\ln\big(p(w_1|c=1)\,p(w_2|c=1)\cdots p(w_n|c=1)\,p(c=1)\big)$ and $\ln\big(p(w_1|c=0)\,p(w_2|c=0)\cdots p(w_n|c=0)\,p(c=0)\big)$. Logarithms are used because the product $p(w_1|c=1)\,p(w_2|c=1)\cdots p(w_n|c=1)$ multiplies many factors smaller than 1 and would underflow to zero in floating point; summing the logs avoids this.
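
A minimal sketch of the underflow problem (the probability 0.01 is a made-up value): the raw product of 200 such factors rounds to 0.0 in double precision, while the sum of their logs remains perfectly usable for comparison.

from math import log

p = 0.01
prod, log_sum = 1.0, 0.0
for _ in range(200):
    prod *= p          # 0.01 ** 200 == 1e-400, smaller than the smallest double
    log_sum += log(p)

print(prod)      # 0.0 -- underflow
print(log_sum)   # -921.03... -- still fine to compare against another class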

6. The bag-of-words document model: a modified setOfWords2Vec

The bag-of-words model modifies step 3 above: a word may occur several times in one sample, so instead of just marking presence, the mapping accumulates a count for each occurrence (a short comparison follows the function).

def bagOfWords2VecMN(vocabList, inputSet):
    returnVec = [0]*len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1
    return returnVec
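
A quick comparison of the two representations, using a hypothetical three-word vocabulary (and assuming bayes.py is importable, as in the tests below):

import bayes

vocab = ['dog', 'stupid', 'my']
post = ['stupid', 'dog', 'stupid']

print(bayes.setOfWords2Vec(vocab, post))    # [1, 1, 0] -- presence only
print(bayes.bagOfWords2VecMN(vocab, post))  # [1, 2, 0] -- 'stupid' counted twice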

7. Testing the classifier

#step1: load the dataset and class labels

import bayes                # the module containing all of the functions above
from numpy import array     # used below to turn Python lists into NumPy arrays

# load the data
listOPosts,listClasses = bayes.loadDataSet()

print(listOPosts)
print("----------------")
print(listClasses)

#[['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'], ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'], ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'], ['stop', 'posting', 'stupid', 'worthless', 'garbage'], ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'], ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
#----------------
#[0, 1, 0, 1, 0, 1]

#step2: build the vocabulary

# build the full vocabulary myVocabList
myVocabList = bayes.createVocabList(listOPosts)

myVocabList
#['mr',
# 'to',
# 'licks',
# 'worthless',
# 'buying',
# 'garbage',
# 'ate',
# 'take',
# 'not',
# 'how',
# 'please',
# 'is',
# 'steak',
# 'quit',
# 'so',
# 'has',
# 'stop',
# 'flea',
# 'problems',
# 'stupid',
# 'help',
# 'cute',
# 'posting',
# 'I',
# 'love',
# 'park',
# 'my',
# 'dog',
# 'dalmation',
# 'food',
# 'maybe',
# 'him']

#step3: map each sample onto the vocabulary

# build the training matrix trainMat
trainMat=[]
for postinDoc in listOPosts:
    trainMat.append(bayes.setOfWords2Vec(myVocabList, postinDoc))
    
trainMat
#[[0,
#  0,
#  .....
#  0,
#  0],
#  .....
#  [0,
#  0,
#  .....
#  1,
#  0]]

#step4: call trainNB0 to compute the conditional probabilities

# train the model
p0V,p1V,pAb = bayes.trainNB0(array(trainMat),array(listClasses))

print(p0V)
print("----------------")
print(p1V)
print("----------------")
print(pAb)

#[-2.56494936 -2.56494936 -2.56494936 -3.25809654 -3.25809654 -3.25809654
# -2.56494936 -3.25809654 -3.25809654 -2.56494936 -2.56494936 -2.56494936
# -2.56494936 -3.25809654 -2.56494936 -2.56494936 -2.56494936 -2.56494936
# -2.56494936 -3.25809654 -2.56494936 -2.56494936 -3.25809654 -2.56494936
# -2.56494936 -3.25809654 -1.87180218 -2.56494936 -2.56494936 -3.25809654
# -3.25809654 -2.15948425]
#----------------
#[-3.04452244 -2.35137526 -3.04452244 -1.94591015 -2.35137526 -2.35137526
# -3.04452244 -2.35137526 -2.35137526 -3.04452244 -3.04452244 -3.04452244
# -3.04452244 -2.35137526 -3.04452244 -3.04452244 -2.35137526 -3.04452244
# -3.04452244 -1.65822808 -3.04452244 -3.04452244 -2.35137526 -3.04452244
# -3.04452244 -2.35137526 -3.04452244 -1.94591015 -3.04452244 -2.35137526
# -2.35137526 -2.35137526]
#----------------
#0.5

#step5: classify new posts

# test case 1
testEntry = ['love', 'my', 'dalmation']
thisDoc = array(bayes.setOfWords2Vec(myVocabList, testEntry))
print(thisDoc)
print("----------------")
print (testEntry,'classified as: ',bayes.classifyNB(thisDoc,p0V,p1V,pAb))

#[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 0 0]
#----------------
#['love', 'my', 'dalmation'] classified as:  0
# test case 2
testEntry = ['stupid', 'garbage']
thisDoc = array(bayes.setOfWords2Vec(myVocabList, testEntry))
print(thisDoc)
print("----------------")
print (testEntry,'classified as: ',bayes.classifyNB(thisDoc,p0V,p1V,pAb))

#[0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0]
#----------------
#['stupid', 'garbage'] classified as:  1

#PS: unpacking classifyNB(vec2Classify, p0Vec, p1Vec, pClass1)
#compare sum(p0V*thisDoc) with sum(p1V*thisDoc); since pAb = 0.5 the two
#log-prior terms are equal, so the larger sum alone decides the class

p0V
#array([-2.56494936, -2.56494936, -2.56494936, -3.25809654, -3.25809654,
#       -3.25809654, -2.56494936, -3.25809654, -3.25809654, -2.56494936,
#       -2.56494936, -2.56494936, -2.56494936, -3.25809654, -2.56494936,
#       -2.56494936, -2.56494936, -2.56494936, -2.56494936, -3.25809654,
#       -2.56494936, -2.56494936, -3.25809654, -2.56494936, -2.56494936,
#       -3.25809654, -1.87180218, -2.56494936, -2.56494936, -3.25809654,
#       -3.25809654, -2.15948425])

thisDoc
#array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
#       0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

p0V*thisDoc
#array([-0.        , -0.        , -0.        , -0.        , -0.        ,
#       -3.25809654, -0.        , -0.        , -0.        , -0.        ,
#       -0.        , -0.        , -0.        , -0.        , -0.        ,
#       -0.        , -0.        , -0.        , -0.        , -3.25809654,
#       -0.        , -0.        , -0.        , -0.        , -0.        ,
#       -0.        , -0.        , -0.        , -0.        , -0.        ,
#       -0.        , -0.        ])

sum(p0V*thisDoc)
#-6.516193076042964

sum(p1V*thisDoc)
#-4.00960333376701

Reposted from blog.csdn.net/m0_46629123/article/details/110198101