MMR (Maximal Marginal Relevance)


MMR is used to compute the similarity between the query text and the candidate documents, and then to rank the documents. The algorithm is defined as

$$\text{MMR}(Q,C,R) = \arg\max_{d_{i} \in C \setminus R} \Big[ \lambda \, \text{sim}(Q,d_{i}) - (1-\lambda) \max_{d_{j} \in R} \text{sim}(d_{i},d_{j}) \Big]$$

where $Q$ is the query text, $C$ is the set of candidate documents, $R$ is an initial set already selected on the basis of relevance, and the $\arg\max$ yields the indices of the $k$ sentences returned by the search.

In the text-summarization task specifically, $Q$ and $C$ both represent the entire document and $d_{i}$ is a sentence in it: the first term inside the brackets is the similarity between a sentence and the whole document, and the second term is the similarity between that sentence and the summary sentences already extracted. In this way we hope the extracted sentences both express the meaning of the whole document and remain diverse, with $\lambda$ weighing the two objectives against each other.
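
To make the selection procedure concrete, here is a minimal sketch of the greedy MMR loop; it assumes only a generic sim(a, b) similarity function and is not tied to any particular measure (those are discussed next).

def mmr_select(query, candidates, sim, k, lam=0.5):
    # Greedily pick k sentences, trading relevance against redundancy.
    selected = []
    candidates = list(candidates)
    while candidates and len(selected) < k:
        best, best_score = None, float("-inf")
        for c in candidates:
            relevance = sim(query, c)
            # Redundancy: highest similarity to anything already selected.
            redundancy = max((sim(c, s) for s in selected), default=0.0)
            score = lam * relevance - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = c, score
        if best is None:
            break
        selected.append(best)
        candidates.remove(best)
    return selected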

The MMR algorithm thus extracts the most important sentences from a document to form a summary. Many similarity measures can be plugged in, for example TF-IDF, cosine similarity, Euclidean distance, or a neural scoring model. Below we look at how to combine TF-IDF and cosine similarity with the MMR algorithm to complete a simple extractive summarization task.

  • TF-IDF + MMR

TF: estimates how important a word is within a document

$$\text{TF}(w,d) = \frac{n_{w,d}}{\sum_{u \in \{ w_{d}\}} n_{u,d}}$$

where $n_{w,d}$ is the number of times word $w$ occurs in document $d$, and $\{w_{d}\}$ is the set of all words in $d$

IDF: the inverse document frequency

$$\text{IDF}(w) = \log\left(\frac{n}{n_{w}}\right)$$

where $n$ is the total number of documents and $n_{w}$ is the number of documents that contain the word $w$

TF-IDF


$$\text{TF-IDF}(w,d) = \text{TF}(w,d) \cdot \text{IDF}(w)$$
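
As a quick worked example: if a word occurs 3 times in a 100-word document, $\text{TF} = 3/100 = 0.03$; if it appears in 10 out of $n = 1000$ documents, $\text{IDF} = \log_{10}(1000/10) = 2$, giving $\text{TF-IDF} = 0.03 \times 2 = 0.06$.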

def TFs(sentences):
    tfs = dict()  # word -> aggregated count over the whole document
    
    for sent in sentences:
        sent = sent.split(" ")
        preprowords, wordFreqs = preprowords_and_wordFreqs(sent)
    
        # Accumulate each sentence's stem counts into the document-level totals.
        # Note: these are raw counts rather than the normalized TF above; the
        # normalizer is the same for every word, so rankings are unaffected.
        for word, freq in wordFreqs.items():
            tfs[word] = tfs.get(word, 0) + freq
                
    return tfs
def IDFs(sentences):
    N = len(sentences)
    idfs, words = dict(), dict()
    
    # Document frequency: in how many sentences does each stem occur?
    for sent in sentences:
        sent = sent.split(" ")
        preprowords, wordFreqs = preprowords_and_wordFreqs(sent)
        
        # Use set() so each sentence counts at most once per stem.
        for word in set(preprowords):
            if wordFreqs.get(word, 0) != 0:
                words[word] = words.get(word, 0) + 1
    
    for word, n in words.items():
        idfs[word] = math.log10(float(N) / n)
        
    return idfs
def TF_IDF(sentences):
    tfs = TFs(sentences)
    idfs = IDFs(sentences)
    
    # Map each TF-IDF score to the list of words that share it.
    retval = dict()
    
    for word in tfs:
        tf_idf = tfs[word] * idfs[word]
        retval.setdefault(tf_idf, []).append(word)
            
    return retval
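
A hypothetical usage sketch (it relies on the preprocessing helper preprowords_and_wordFreqs defined in the full implementation below):

sents = ["the cat sat on the mat", "the dog chased the cat"]
tfs = TFs(sents)       # aggregated stem counts, e.g. tfs['the'] == 4
idfs = IDFs(sents)     # log10(N / document frequency) for each stem
tfidf = TF_IDF(sents)  # maps each TF-IDF score to the stems sharing it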

Now that we know how to compute TF-IDF values, we need to see how to use them to measure the similarity between sentences.

def sentenceSim(sent1, sent2, IDF_w):
    
    numerator = 0
    denominator = 0

    sent1 = sent1.split(" ")
    preprowords1, wordFreqs1 = preprowords_and_wordFreqs(sent1)
    
    sent2 = sent2.split(" ")
    preprowords2, wordFreqs2 = preprowords_and_wordFreqs(sent2)

    # IDF-weighted overlap between the two sentences.
    for word in preprowords2:
        numerator += wordFreqs1.get(word, 0) * wordFreqs2.get(word, 0) * IDF_w.get(word, 0) ** 2
        
    # Normalize by sent1's own vector norm (wordFreqs1, not wordFreqs2),
    # so non-overlapping sentences score 0 instead of dividing by zero.
    for word in preprowords1:
        denominator += (wordFreqs1.get(word, 0) * IDF_w.get(word, 0)) ** 2
    try:
        return numerator / math.sqrt(denominator)
    except ZeroDivisionError:
        return float("-inf")
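
Written out, the score this function computes is

$$\text{sim}(s_{1}, s_{2}) = \frac{\sum_{w \in s_{2}} \text{tf}_{w,s_{1}} \cdot \text{tf}_{w,s_{2}} \cdot \text{IDF}(w)^{2}}{\sqrt{\sum_{w \in s_{1}} \big(\text{tf}_{w,s_{1}} \cdot \text{IDF}(w)\big)^{2}}}$$

an IDF-weighted inner product of the two term-frequency vectors, normalized by the first sentence's norm (a one-sided variant of cosine similarity).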

Computing the MMR score:

def MMRScore(Si, query, Sj, lambta, IDF):
    # Relevance term: similarity of candidate Si to the query.
    Sim1 = sentenceSim(Si, query, IDF)
    l_expr = lambta * Sim1
    value = [float("-inf")]

    # Redundancy term: maximum similarity to any sentence already in Sj.
    for sent in Sj:
        Sim2 = sentenceSim(Si, sent, IDF)
        value.append(Sim2)

    r_expr = (1 - lambta) * max(value)
    MMR_SCORE = l_expr - r_expr

    return MMR_SCORE
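
Note how lambta implements the $\lambda$ trade-off from the MMR formula: with lambta = 1 the score reduces to pure query relevance, while with lambta = 0 it only penalizes redundancy against the already-selected set Sj; the driver code below uses 0.5 as a middle ground.
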
  • Cosine similarity

A simple option is to represent sentences with scikit-learn's CountVectorizer (a pretrained word-embedding model would of course give a better sentence representation) and to compute the cosine similarity with cosine_similarity.

def calculateSimilarity(sentence, doc):
    if doc == []:
        return 0
    # Build a shared vocabulary from the sentence and the rest of the document.
    vocab = {}
    for word in sentence.split():
        vocab[word] = 0

    docInOneSentence = ''
    for t in doc:
        docInOneSentence += (t + ' ')
        for word in t.split():
            vocab[word] = 0

    cv = CountVectorizer(vocabulary=vocab.keys())

    docVector = cv.fit_transform([docInOneSentence])
    sentenceVector = cv.fit_transform([sentence])
    return cosine_similarity(docVector, sentenceVector)[0][0]
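
A hypothetical usage sketch (with the scikit-learn imports shown in the full code below):

doc = ["the cat sat on the mat", "the dog chased the cat"]
print(calculateSimilarity("the cat sat", doc))  # cosine similarity in [0, 1]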

Complete implementation


# coding: utf-8

from __future__ import absolute_import,print_function,unicode_literals,division

import collections
import unicodedata
import re
import os
import nltk
from nltk.stem.porter import PorterStemmer
import math


with open('article.txt','r') as f:  # any text file can be used here
    article = f.read()
    
def unicode_to_ascii(text):
    return ''.join(c for c in unicodedata.normalize('NFD',text) if unicodedata.category(c) != 'Mn')


# Text preprocessing
def process_text(text):
    
    #text = unicode_to_ascii(text.lower().strip())
    # create a space between a word and the punctuation following it
    text = re.sub(r"([?.!,¿])", r" \1 ", text)
    text = re.sub(r'[" "]+', " ", text)
    # replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
    text = re.sub(r"[^a-zA-Z?.!¿,]+", " ", text)
    text = text.replace(',',' ')
    text = text.replace('\n',' ')
    text = text.strip()
    
    return text.lower()

STOPWORDS = frozenset([
    'all', 'six', 'just', 'less', 'being', 'indeed', 'over', 'move', 'anyway', 'four', 'not', 'own', 'through',
    'using', 'fifty', 'where', 'mill', 'only', 'find', 'before', 'one', 'whose', 'system', 'how', 'somewhere',
    'much', 'thick', 'show', 'had', 'enough', 'should', 'to', 'must', 'whom', 'seeming', 'yourselves', 'under',
    'ours', 'two', 'has', 'might', 'thereafter', 'latterly', 'do', 'them', 'his', 'around', 'than', 'get', 'very',
    'de', 'none', 'cannot', 'every', 'un', 'they', 'front', 'during', 'thus', 'now', 'him', 'nor', 'name', 'regarding',
    'several', 'hereafter', 'did', 'always', 'who', 'didn', 'whither', 'this', 'someone', 'either', 'each', 'become',
    'thereupon', 'sometime', 'side', 'towards', 'therein', 'twelve', 'because', 'often', 'ten', 'our', 'doing', 'km',
    'eg', 'some', 'back', 'used', 'up', 'go', 'namely', 'computer', 'are', 'further', 'beyond', 'ourselves', 'yet',
    'out', 'even', 'will', 'what', 'still', 'for', 'bottom', 'mine', 'since', 'please', 'forty', 'per', 'its',
    'everything', 'behind', 'does', 'various', 'above', 'between', 'it', 'neither', 'seemed', 'ever', 'across', 'she',
    'somehow', 'be', 'we', 'full', 'never', 'sixty', 'however', 'here', 'otherwise', 'were', 'whereupon', 'nowhere',
    'although', 'found', 'alone', 're', 'along', 'quite', 'fifteen', 'by', 'both', 'about', 'last', 'would',
    'anything', 'via', 'many', 'could', 'thence', 'put', 'against', 'keep', 'etc', 'amount', 'became', 'ltd', 'hence',
    'onto', 'or', 'con', 'among', 'already', 'co', 'afterwards', 'formerly', 'within', 'seems', 'into', 'others',
    'while', 'whatever', 'except', 'down', 'hers', 'everyone', 'done', 'least', 'another', 'whoever', 'moreover',
    'couldnt', 'throughout', 'anyhow', 'yourself', 'three', 'from', 'her', 'few', 'together', 'top', 'there', 'due',
    'been', 'next', 'anyone', 'eleven', 'cry', 'call', 'therefore', 'interest', 'then', 'thru', 'themselves',
    'hundred', 'really', 'sincere', 'empty', 'more', 'himself', 'elsewhere', 'mostly', 'on', 'fire', 'am', 'becoming',
    'hereby', 'amongst', 'else', 'part', 'everywhere', 'too', 'kg', 'herself', 'former', 'those', 'he', 'me', 'myself',
    'made', 'twenty', 'these', 'was', 'bill', 'cant', 'us', 'until', 'besides', 'nevertheless', 'below', 'anywhere',
    'nine', 'can', 'whether', 'of', 'your', 'toward', 'my', 'say', 'something', 'and', 'whereafter', 'whenever',
    'give', 'almost', 'wherever', 'is', 'describe', 'beforehand', 'herein', 'doesn', 'an', 'as', 'itself', 'at',
    'have', 'in', 'seem', 'whence', 'ie', 'any', 'fill', 'again', 'hasnt', 'inc', 'thereby', 'thin', 'no', 'perhaps',
    'latter', 'meanwhile', 'when', 'detail', 'same', 'wherein', 'beside', 'also', 'that', 'other', 'take', 'which',
    'becomes', 'you', 'if', 'nobody', 'unless', 'whereas', 'see', 'though', 'may', 'after', 'upon', 'most', 'hereupon',
    'eight', 'but', 'serious', 'nothing', 'such', 'why', 'off', 'a', 'don', 'whereby', 'third', 'i', 'whole', 'noone',
    'sometimes', 'well', 'amoungst', 'yours', 'their', 'rather', 'without', 'so', 'five', 'the', 'first', 'with',
    'make', 'once'
])

# Remove stopwords
def remove_stopwords(s):
    s = unicode_to_ascii(s)
    
    return " ".join(w for w in s.split() if w not in STOPWORDS)


def create_dataset(text):
    # Split the article into sentences on '.' and clean each one.
    data = [remove_stopwords(process_text(line)) for line in text.split(".")]

    return [s for s in data if s]  # drop empty sentences

def preprowords_and_wordFreqs(sent):
    porter_stemmer = PorterStemmer()
    preprowords = list()  # stemmed words of the sentence

    for word in sent:
        preprowords.append(porter_stemmer.stem(word))

    # Frequency dict of the 15 most common stems.
    wordFreqs = dict(collections.Counter(preprowords).most_common(15))

    return preprowords, wordFreqs
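
# Example (hypothetical): preprowords_and_wordFreqs("the cats sat".split(" "))
# returns (['the', 'cat', 'sat'], {'the': 1, 'cat': 1, 'sat': 1}).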


# ### TF
# 
# Estimates how important a word is within a document
# 
# $$\text{TF}(w,d) = \frac{n_{w,d}}{\sum_{u \in \{ w_{d}\}}n_{u,d}}$$
# 
# where $n_{w,d}$ is the number of times word w occurs in document d, and $\{w_{d}\}$ is the set of all words in d

def TFs(sentences):
    tfs = dict()  # word -> aggregated count over the whole document
    
    for sent in sentences:
        sent = sent.split(" ")
        preprowords, wordFreqs = preprowords_and_wordFreqs(sent)
    
        # Accumulate each sentence's stem counts into the document-level totals.
        # Note: these are raw counts rather than the normalized TF above; the
        # normalizer is the same for every word, so rankings are unaffected.
        for word, freq in wordFreqs.items():
            tfs[word] = tfs.get(word, 0) + freq
                
    return tfs
        
        

# ### IDF
# 
# Inverse document frequency
# 
# $$\text{IDF}(w) = \log(\frac{n}{n_{w}})$$
# 
# where n is the total number of documents (here, sentences) and $n_{w}$ is the number of documents containing word w

def IDFs(sentences):
    N = len(sentences)
    idfs, words = dict(), dict()
    
    # Document frequency: in how many sentences does each stem occur?
    for sent in sentences:
        sent = sent.split(" ")
        preprowords, wordFreqs = preprowords_and_wordFreqs(sent)
        
        # Use set() so each sentence counts at most once per stem.
        for word in set(preprowords):
            if wordFreqs.get(word, 0) != 0:
                words[word] = words.get(word, 0) + 1
    
    for word, n in words.items():
        idfs[word] = math.log10(float(N) / n)
        
    return idfs


# ### TF-IDF
# 
# $$\text{TF-IDF}(w,d) = \text{TF}(w,d) \cdot \text{IDF}(w)$$

def TF_IDF(sentences):
    tfs = TFs(sentences)
    idfs = IDFs(sentences)
    
    # Map each TF-IDF score to the list of words that share it.
    retval = dict()
    
    for word in tfs:
        tf_idf = tfs[word] * idfs[word]
        retval.setdefault(tf_idf, []).append(word)
            
    return retval


def sentenceSim(sent1, sent2, IDF_w):
    
    numerator = 0
    denominator = 0

    sent1 = sent1.split(" ")
    preprowords1, wordFreqs1 = preprowords_and_wordFreqs(sent1)
    
    sent2 = sent2.split(" ")
    preprowords2, wordFreqs2 = preprowords_and_wordFreqs(sent2)

    # IDF-weighted overlap between the two sentences.
    for word in preprowords2:
        numerator += wordFreqs1.get(word, 0) * wordFreqs2.get(word, 0) * IDF_w.get(word, 0) ** 2
        
    # Normalize by sent1's own vector norm (wordFreqs1, not wordFreqs2),
    # so non-overlapping sentences score 0 instead of dividing by zero.
    for word in preprowords1:
        denominator += (wordFreqs1.get(word, 0) * IDF_w.get(word, 0)) ** 2
    try:
        return numerator / math.sqrt(denominator)
    except ZeroDivisionError:
        return float("-inf")
        
        

def build_query(sentences, TF_IDF_w, n):
    # Take the n highest-scoring TF-IDF stems as a pseudo-query.
    scores = list(TF_IDF_w.keys())
    scores.sort(reverse=True)
    
    i = 0
    j = 0
    querywords = list()
    
    while i < n and j < len(scores):
        words = TF_IDF_w[scores[j]]
        for word in words:
            querywords.append(word)
            i += 1
            if i >= n:
                break
        j += 1
    
    return " ".join(querywords)
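
# Example (hypothetical): if the two highest TF-IDF scores map to the stem
# lists ['summar'] and ['sentenc', 'document'], then build_query(..., n=3)
# returns "summar sentenc document".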


def best_sentence(sentences, query, IDF):
    # Pick the sentence most similar to the query and remove it from the pool.
    best_sent = None
    maxVal = float("-inf")
    
    for sent in sentences:
        similarity = sentenceSim(sent, query, IDF)
        
        if similarity > maxVal:
            best_sent = sent
            maxVal = similarity
    sentences.remove(best_sent)
    
    return best_sent

def MMRScore(Si, query, Sj, lambta, IDF):
    # Relevance term: similarity of candidate Si to the query.
    Sim1 = sentenceSim(Si, query, IDF)
    l_expr = lambta * Sim1
    value = [float("-inf")]

    # Redundancy term: maximum similarity to any sentence already in Sj.
    for sent in Sj:
        Sim2 = sentenceSim(Si, sent, IDF)
        value.append(Sim2)

    r_expr = (1 - lambta) * max(value)
    MMR_SCORE = l_expr - r_expr

    return MMR_SCORE

def make_summary(sentences, best_sentence, query, summary_length, lambta, IDF):
    # Greedily add the highest-MMR sentence until the summary reaches
    # roughly summary_length stemmed words.
    summary = [best_sentence]
    preprowords, wordFreqs = preprowords_and_wordFreqs(best_sentence.split(" "))
    sum_len = len(preprowords)
    
    while sum_len < summary_length and sentences:
        MMRVal = {}
        
        for sent in sentences:
            MMRVal[sent] = MMRScore(sent, query, summary, lambta, IDF)

        maxxer = max(MMRVal, key=MMRVal.get)
        summary.append(maxxer)
        sentences.remove(maxxer)
        
        preprowords, wordFreqs = preprowords_and_wordFreqs(maxxer.split(" "))
        sum_len += len(preprowords)
        
    return summary

# Build the sentence list first: the functions above expect a list of
# cleaned sentences, not the raw article string.
sentences = create_dataset(article)

IDF_w = IDFs(sentences)
TF_IDF_w = TF_IDF(sentences)

query = build_query(sentences, TF_IDF_w, 10)

best1sentence = best_sentence(sentences, query, IDF_w)

summary = make_summary(sentences, best1sentence, query, 100, 0.5, IDF_w)

final_summary = ""

for sent in summary:
    final_summary += " " + sent + "."
    
final_summary = final_summary[:-1]

print(final_summary)


# #### Measuring sentence importance in the document with cosine similarity
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import operator

def calculateSimilarity(sentence, doc):
    if doc == []:
        return 0
    # Build a shared vocabulary from the sentence and the rest of the document.
    vocab = {}
    for word in sentence.split():
        vocab[word] = 0

    docInOneSentence = ''
    for t in doc:
        docInOneSentence += (t + ' ')
        for word in t.split():
            vocab[word] = 0

    cv = CountVectorizer(vocabulary=vocab.keys())

    docVector = cv.fit_transform([docInOneSentence])
    sentenceVector = cv.fit_transform([sentence])
    return cosine_similarity(docVector, sentenceVector)[0][0]
             

def compute_scores(sentences):
    scores = {}
    for sent in sentences:
        # Compare each sentence against all the others; build the "rest of
        # the document" explicitly instead of mutating the list mid-iteration.
        others = [s for s in sentences if s != sent]
        scores[sent] = calculateSimilarity(sent, others)
        
    return scores

sentences = create_dataset(article)  # rebuild the pool consumed above
scores = compute_scores(sentences)
print(scores)

n = 25 * len(sentences) // 100  # summary size: 25% of the sentences
alpha = 0.7
summarySet = []
while n > 0:
    mmr = {}
    for sentence in scores.keys():
        if sentence not in summarySet:
            mmr[sentence] = alpha * scores[sentence] - (1 - alpha) * calculateSimilarity(sentence, summarySet)
    selected = max(mmr.items(), key=operator.itemgetter(1))[0]
    summarySet.append(selected)
    n -= 1

print(str(summarySet))
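
Note that alpha here plays exactly the role of $\lambda$ in the MMR formula at the top of the article: alpha * scores[sentence] is the relevance term, and (1 - alpha) * calculateSimilarity(sentence, summarySet) is the redundancy penalty against the sentences already chosen for the summary.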
