Chapter 4: Naive Bayes (text classification, spam filtering, getting locale preferences)

Bayes' theorem:

P(c|x) = \frac{P(c)\,P(x|c)}{P(x)}

P(c) is the class "prior" probability; P(x|c) is the class-conditional probability of sample x with respect to class label c, also known as the "likelihood".

The meaning of "naive": the features are assumed to be mutually independent (the attribute conditional independence assumption) and equally important.
Although these assumptions are somewhat flawed, naive Bayes works well in practice.

P(c|x) = \frac{P(c)\,P(x|c)}{P(x)} = \frac{P(c)}{P(x)}\prod_{i=1}^{d}P(x_i|c)

where d is the number of attributes and x_i is the value of x on the i-th attribute.
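For instance, with d = 2, P(c) = 0.5, P(x_1|c) = 0.3 and P(x_2|c) = 0.2, the numerator is 0.5 \times 0.3 \times 0.2 = 0.03; since P(x) is the same for every class, comparing these numerators across classes is enough to pick the most probable one.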

For a discrete attribute, the conditional probability can be estimated as:

P(x_i|c) = \frac{|D_{c,x_i}|}{|D_c|}

where D_c denotes the set of class-c samples in the training set D, and D_{c,x_i} denotes the subset of D_c whose value on the i-th attribute is x_i.
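A toy sketch of this count-based estimate (the data and helper function below are made up for illustration):

# toy training set: (attribute-value tuple, class label)
D = [(('sunny',), 1), (('sunny',), 1), (('rainy',), 1), (('sunny',), 0)]

def condProb(xi, i, c):
    Dc = [x for x, label in D if label == c]   #samples belonging to class c
    return sum(1 for x in Dc if x[i] == xi) / float(len(Dc))   #|D_{c,xi}| / |D_c|

print(condProb('sunny', 0, 1))   # 2/3, since two of the three class-1 samples are 'sunny'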

For a continuous attribute, a probability density function can be used instead. Assume P(x_i|c) \sim \mathcal{N}(\mu_{c,i}, \sigma_{c,i}^2), where \mu_{c,i} and \sigma_{c,i}^2 are the mean and variance of the i-th attribute over the class-c samples; then:

P(x_i|c) = \frac{1}{\sqrt{2\pi}\,\sigma_{c,i}} \exp\left(-\frac{(x_i-\mu_{c,i})^2}{2\sigma_{c,i}^2}\right)
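A minimal sketch of this estimate (the function name and inputs are made up; values is assumed to hold the i-th attribute of every class-c training sample):

import math

def gaussianLikelihood(xi, values):
    mu = sum(values) / float(len(values))                           #mean over class-c samples
    var = sum((v - mu) ** 2 for v in values) / float(len(values))   #variance (assumed > 0)
    return math.exp(-(xi - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)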

Still to study: semi-naive Bayes classifiers (one-dependent estimation, ODE; Super-Parent ODE, SPODE), TAN (built on the maximum weighted spanning tree algorithm), AODE (an ensemble-based, more powerful ODE); Bayesian networks (which use a directed acyclic graph, DAG, to model the dependencies between attributes...)

Reference: Machine Learning by Zhou Zhihua.

Pros: works even with little data; handles multiple classes.
Cons: sensitive to how the input data is prepared.
Works with: nominal values.

Deciding whether a document is abusive:

Training algorithm based on the set-of-words model (only whether a word appears is considered):

from numpy import *

def loadDataSet():
    postingList=[['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                 ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                 ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                 ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                 ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                 ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0,1,0,1,0,1]    #1 is abusive, 0 not
    return postingList,classVec
                 
def createVocabList(dataSet):
    vocabSet = set([])  #create empty set
    for document in dataSet:
        vocabSet = vocabSet | set(document) #union of the two sets
    return list(vocabSet)

def setOfWords2Vec(vocabList, inputSet):
    returnVec = [0]*len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
        else: print "the word: %s is not in my Vocabulary!" % word
    return returnVec

def trainNB0(trainMatrix,trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory)/float(numTrainDocs)   #prior probability of class 1
    p0Num = zeros(numWords); p1Num = zeros(numWords)   #change to ones() 
    p0Denom = 0.0; p1Denom = 0.0                       #change to 2.0                   
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = p1Num/p1Denom          #change to log()
    p0Vect = p0Num/p0Denom          #change to log()
    return p0Vect,p1Vect,pAbusive

Output from the cmd session:

C:\Users\Qiuyi>cd C:\Users\Qiuyi\eclipse-workspace\ML_inAction\Ch04

C:\Users\Qiuyi\eclipse-workspace\ML_inAction\Ch04>python
Python 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:25:58) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import *
>>> import bayes
>>> listOPosts,listClasses = bayes.loadDataSet()
>>> myVocabList = bayes.createVocabList(listOPosts)
>>> trainMat=[]
>>> for postinDoc in listOPosts:
...     trainMat.append(bayes.setOfWords2Vec(myVocabList,postinDoc))
...
>>> listOPosts
[['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'], ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'], ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'], ['stop', 'posting', 'stupid', 'worthless', 'garbage'], ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'], ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
>>> listClasses
[0, 1, 0, 1, 0, 1]
>>> myVocabList
['cute', 'love', 'help', 'garbage', 'quit', 'I', 'problems', 'is', 'park', 'stop', 'flea', 'dalmation', 'licks', 'food', 'not', 'him', 'buying', 'posting', 'has', 'worthless', 'ate', 'to', 'maybe', 'please', 'dog', 'how', 'stupid', 'so', 'take', 'mr', 'steak', 'my']
>>> trainMat
[[0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1],
......, 
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0]]
>>> p0V,p1V,pAb=bayes.trainNB0(trainMat,listClasses)
>>> pAb
0.5
>>> p0V
array([0.04166667, 0.04166667, 0.04166667, 0.        , 0.        ,
       0.04166667, 0.04166667, 0.04166667, 0.        , 0.04166667,
	   ......
       0.04166667, 0.        , 0.04166667, 0.        , 0.04166667,
       0.04166667, 0.125     ])
>>> p1V
array([0.        , 0.        , 0.        , 0.05263158, 0.05263158,
       0.        , 0.        , 0.        , 0.05263158, 0.05263158,
	   ......
       0.        , 0.15789474, 0.        , 0.05263158, 0.        ,
       0.        , 0.        ])

The value at index 26 of p1V is 0.15789474, the largest in the array. Index 26 of myVocabList holds the word stupid, which means stupid is the word most indicative of class 1.

Improved text classifier:

To keep the information carried by the other attributes from being "wiped out" by attribute values that never appear in the training set, probability estimates are usually "smoothed"; the common choice is the "Laplacian correction".

We will be multiplying many probabilities together, and a single zero probability would zero out the whole product. To prevent this, initialize every word's occurrence count to 1 and initialize each denominator to N, where N is the number of values the attribute can take (here N = 2, since a word is either present or absent).

To prevent underflow (or rounding to 0) when many small numbers are multiplied, take the natural logarithm and work with the log-likelihood instead:
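Taking logs turns the product into a sum, which is exactly what classifyNB computes below:

\log\left(P(c)\prod_{i=1}^{d}P(x_i|c)\right) = \log P(c) + \sum_{i=1}^{d}\log P(x_i|c)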

p0Num = ones(numWords); p1Num = ones(numWords)   #was zeros(numWords)
p0Denom = 2.0; p1Denom = 2.0                     #was 0.0
p0Vect = log(p0Num/p0Denom)                      #was p0Num/p0Denom
p1Vect = log(p1Num/p1Denom)                      #was p1Num/p1Denom
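Putting the four changes together, the improved trainNB0 reads as follows (assembled from the changed lines above):

def trainNB0(trainMatrix,trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory)/float(numTrainDocs)
    p0Num = ones(numWords); p1Num = ones(numWords)   #counts start at 1 (Laplace smoothing)
    p0Denom = 2.0; p1Denom = 2.0                     #denominators start at 2
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = log(p1Num/p1Denom)   #log to avoid underflow
    p0Vect = log(p0Num/p0Denom)
    return p0Vect,p1Vect,pAbusive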

def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    p1 = sum(vec2Classify * p1Vec) + log(pClass1)    #element-wise mult
    p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1)
    if p1 > p0:
        return 1
    else: 
        return 0
        
def testingNB():
    listOPosts,listClasses = loadDataSet()
    myVocabList = createVocabList(listOPosts)
    trainMat=[]
    for postinDoc in listOPosts:
        trainMat.append(setOfWords2Vec(myVocabList, postinDoc))
    p0V,p1V,pAb = trainNB0(array(trainMat),array(listClasses))  #the improved trainNB0()
    testEntry = ['love', 'my', 'dalmation']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print testEntry,'classified as: ',classifyNB(thisDoc,p0V,p1V,pAb)
    testEntry = ['stupid', 'garbage']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print testEntry,'classified as: ',classifyNB(thisDoc,p0V,p1V,pAb)

Output from the cmd session:

>>> reload(bayes)
<module 'bayes' from 'bayes.py'>
>>> bayes.testingNB()
['love', 'my', 'dalmation'] classified as:  0
['stupid', 'garbage'] classified as:  1

Bag-of-words model

setOfWords2Vec is upgraded to bagOfWords2VecMN. In a bag of words each word can occur multiple times, while in a set of words each word can appear only once. The bag-of-words model performs somewhat better than the set-of-words model on document classification.

def bagOfWords2VecMN(vocabList, inputSet):
    returnVec = [0]*len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1
            #set-of-words version: returnVec[vocabList.index(word)] = 1
    return returnVec
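A quick contrast of the two encodings on a made-up vocabulary:

vocab = ['dog', 'stupid', 'my']
doc = ['stupid', 'stupid', 'my']
print(setOfWords2Vec(vocab, doc))     # [0, 1, 1] -- presence only
print(bagOfWords2VecMN(vocab, doc))   # [0, 2, 1] -- occurrence counts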

Naive Bayes spam filtering with the bag-of-words model

To apply naive Bayes to real-world problems, we first need to turn the raw text into a list of token strings, and then into word vectors.

The string can be split with a regular expression: r'\W*' matches any run of characters other than letters and digits. Tokens of one or two characters are discarded, and everything is lower-cased so that word forms are consistent (the capitalized first letter would only matter if we were looking for sentences).
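A quick sketch of that parsing step (the sample string is made up; r'\W+' is used here because r'\W*' can also produce zero-length matches):

import re

sample = 'This book is the best book on M.L. I have ever laid eyes upon.'
tokens = re.split(r'\W+', sample)   #split on runs of non-word characters
words = [tok.lower() for tok in tokens if len(tok) > 2]
print(words)   #tokens of one or two characters, such as 'M', 'L', 'I', are dropped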

There are 50 emails in this example; 10 are randomly chosen as the test set and the rest form the training set, i.e. hold-out cross validation. The final figure computed is the probability that an email is misclassified.

def textParse(bigString):    #input is big string, #output is word list
    import re
    listOfTokens = re.split(r'\W*', bigString)
    return [tok.lower() for tok in listOfTokens if len(tok) > 2] 
    
def spamTest():
    docList=[]; classList = []; fullText =[]
    for i in range(1,26):
        wordList = textParse(open('email/spam/%d.txt' % i).read())
        docList.append(wordList)
        fullText.extend(wordList)   #can be removed
        classList.append(1)
        wordList = textParse(open('email/ham/%d.txt' % i).read())
        docList.append(wordList)
        fullText.extend(wordList)   #can be removed
        classList.append(0)
    vocabList = createVocabList(docList)          #create vocabulary
    trainingSet = range(50); testSet=[]           #indices 0-49; 10 of them move to the test set
    for i in range(10):
        randIndex = int(random.uniform(0,len(trainingSet)))
        testSet.append(trainingSet[randIndex])
        del(trainingSet[randIndex])  
    trainMat=[]; trainClasses = []
    for docIndex in trainingSet:     #train the classifier (get probs) trainNB0
        trainMat.append(bagOfWords2VecMN(vocabList, docList[docIndex]))
        trainClasses.append(classList[docIndex])
    p0V,p1V,pSpam = trainNB0(array(trainMat),array(trainClasses))
    errorCount = 0
    for docIndex in testSet:        #classify the remaining items
        wordVector = bagOfWords2VecMN(vocabList, docList[docIndex])
        if classifyNB(array(wordVector),p0V,p1V,pSpam) != classList[docIndex]:
            errorCount += 1
            print "classification error",docList[docIndex]
    print 'the error rate is: ',float(errorCount)/len(testSet)
    #return vocabList,fullText  #fullText has several hundred entries

Output:

>>> bayes.spamTest()
classification error ['yay', 'you', 'both', 'doing', 'fine', 'working', 'mba', 'design', 'strategy', 'cca', 'top', 'art', 'school', 'new', 'program', 'focusing', 'more', 'right', 'brained', 'creative', 'and', 'strategic', 'approach', 'management', 'the', 'way', 'done', 'today']
the error rate is:  0.1
>>> bayes.spamTest()
classification error ['home', 'based', 'business', 'opportunity', 'knocking', 'your', 'door', 'don', 'rude', 'and', 'let', 'this', 'chance', 'you', 'can', 'earn', 'great', 'income', 'and', 'find', 'your', 'financial', 'life', 'transformed', 'learn', 'more', 'here', 'your', 'success', 'work', 'from', 'home', 'finder', 'experts']
the error rate is:  0.1
>>> bayes.spamTest()
the error rate is:  0.0
>>> bayes.spamTest()
classification error ['yeah', 'ready', 'may', 'not', 'here', 'because', 'jar', 'jar', 'has', 'plane', 'tickets', 'germany', 'for']
the error rate is:  0.1
>>> bayes.spamTest()
the error rate is:  0.0
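Because the 10 test emails are drawn at random, the error rate fluctuates from run to run, as seen above. For a more stable estimate, average over many runs; a sketch, assuming spamTest() is modified to return float(errorCount)/len(testSet) instead of only printing it:

numRuns = 10
totalError = 0.0
for _ in range(numRuns):
    totalError += bayes.spamTest()   #assumes spamTest() returns the error rate
print('average error rate: %f' % (totalError / numRuns))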

Reference links:

Regular expressions:
http://www.runoob.com/python/python-reg-expressions.html

The difference between extend and append:
https://justjavac.iteye.com/blog/1827915

>>> li = ['a', 'b', 'c']
>>> li.extend(['d', 'e', 'f']) 
>>> li
['a', 'b', 'c', 'd', 'e', 'f']
>>> len(li)                    
6
>>> li[-1]
'f'
>>> li = ['a', 'b', 'c']
>>> li.append(['d', 'e', 'f']) 
>>> li
['a', 'b', 'c', ['d', 'e', 'f']]
>>> len(li)                    
4
>>> li[-1]
['d', 'e', 'f']

Getting locale preferences from personal ads

http://newyork.craigslist.org/stp/index.rss is no longer accessible:

>>> ny=feedparser.parse('http://newyork.craigslist.org/stp/index.rss')
>>> ny['entries']
[]

Reference link:

Basic usage of feedparser in Python:
feedparser is the most widely used RSS library in Python; with it we can easily pull the title, link, and article entries from any RSS or Atom feed.
https://blog.csdn.net/lilong117194/article/details/77323673
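A minimal usage sketch (the feed URL is only an example; any live RSS/Atom feed works):

import feedparser

d = feedparser.parse('https://www.nasa.gov/rss/dyn/breaking_news.rss')   #example feed URL
print(d['feed']['title'])        #feed title
for entry in d['entries'][:3]:   #first few entries
    print(entry['title'])
    print(entry['link'])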

Reposted from blog.csdn.net/weixin_34275246/article/details/85065982