[NLP] English Text Preprocessing with Gensim (doc2bow, LDA)

Copyright notice: This is an original article by the author, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/YWP_2016/article/details/102561755

Contents

Theory

Differences between mainstream NLP packages

Code

Setup: imports and data

Preprocessing: lowercase conversion

Preprocessing: removing special characters

Preprocessing: removing stopwords

Preprocessing: POS tagging + lemmatization

Modeling: text vectorization (doc2bow)

Modeling: LDA

Results

all_code

Thoughts

References (edited)


Theory

Differences between mainstream NLP packages

Taking NLTK, Sklearn, and Gensim as examples:

  • NLTK is generally used for text preprocessing (stemming/lemmatization, POS tagging, parsing, etc.)
  • Gensim is generally used for topic modeling and document similarity analysis
  • Sklearn is mainly used for machine learning (classification, clustering, etc.)

The original English description:

  • NLTK is specialized in gathering and classifying unstructured texts. If you need e.g. a POS-tagger, lemmatizer, dependency analyzer, etc., you'll find them there, and sometimes nowhere else. It offers a quite broad range of tools developed mainly in academic research. But: it is often not very well optimized - involving NLTK libraries often means accepting a huge performance loss. If you do text gathering or preprocessing, it's fine to begin with - until you find some faster alternatives.

  • SKLEARN is much more an analysis tool than a gathering tool. It is well documented, well optimized, and covers a broad range of statistical methods.

  • GENSIM is a very well optimized, but also highly specialized, library for doing jobs in the periphery of "WORD2DOC". That is: it offers an easy and surprisingly well working and swift AI approach to unstructured texts. If you are interested in production, you might also have a look at TensorFlow, which offers a mathematically generalized, yet highly performant, model.

Conclusion: Although they overlap considerably, I personally prefer using NLTK for the pre-processing of natural text (i.e., gathering, wrangling, stemming, POS-tagging, filtering and 'noise' reduction), GENSIM as a kind of base platform (for autoencoding, semantic (topic) and syntactic (sequence) pattern recognition, and as such similarity recognition, dimensionality reduction, and multilabel classification), and SKLEARN, which can easily be mixed with NLTK and GENSIM, for third-step evaluation / ensembling / optimizing / processing issues. In short, the three can be used in combination.

Generally,
- NLTK is used primarily for general NLP tasks (tokenization, POS tagging, parsing, etc.)
- Sklearn is used primarily for machine learning (classification, clustering, etc.)
- Gensim is used primarily for topic modeling and document similarity.
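To make this division of labor concrete, here is a minimal sketch (not from the original post) showing one typical call from each library; it assumes nltk, gensim and scikit-learn are installed, and the tiny docs list is made up for illustration.

from nltk.stem import PorterStemmer                      # NLTK: preprocessing (stemming)
from gensim import corpora                               # Gensim: bag-of-words / topic modeling
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans                       # Sklearn: machine learning (clustering)

docs = ["Cats like fish", "Dogs like bones", "Cats and dogs are pets"]

# NLTK: stem each whitespace token
stemmer = PorterStemmer()
stemmed = [[stemmer.stem(w) for w in d.lower().split()] for d in docs]

# Gensim: build a dictionary and a doc2bow corpus, the input format for LDA
dictionary = corpora.Dictionary(stemmed)
bow_corpus = [dictionary.doc2bow(doc) for doc in stemmed]

# Sklearn: vectorize the raw texts and cluster them
X = CountVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(bow_corpus[0], labels)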

Code

Setup: imports and data

import re
import numpy as np
import pandas as pd
from pprint import pprint
# Gensim
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
# spacy for lemmatization
import spacy
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])  # the bare 'en' shortcut only works in spaCy 2.x
# Plotting tools
import pyLDAvis
import pyLDAvis.gensim  # don't skip this (renamed to pyLDAvis.gensim_models in newer pyLDAvis releases)
import matplotlib.pyplot as plt
# Enable logging for gensim - optional
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.ERROR)
import warnings
warnings.filterwarnings("ignore",category=DeprecationWarning)
# import the NLTK stopword list
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
stop_words.extend(['from', 'subject', 're', 'edu', 'use'])  # extend the stopword list as needed

#data=("I love apples#   &   3241","he likes PIG3s","she do not like anything,except apples.\.")
f=open('xxx.txt','r',encoding='utf-8')
data=f.readlines()
# note: f.read() returns a single string, while f.readlines() returns a list (one element per line)
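A quick way to see the difference described in the comment above (a sketch; it assumes the same xxx.txt with one document per line):

with open('xxx.txt', 'r', encoding='utf-8') as f:
    as_string = f.read()        # the whole file as one string
with open('xxx.txt', 'r', encoding='utf-8') as f:
    as_lines = f.readlines()    # a list with one string per line
print(type(as_string), type(as_lines), len(as_lines))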

Preprocessing: lowercase conversion

Preprocessing: removing special characters

def sent_to_words(sentences):
    for sentence in sentences:
        yield(gensim.utils.simple_preprocess(str(sentence), deacc=True))  # simple_preprocess lowercases, tokenizes and drops punctuation/digits; deacc=True also strips accents

data_words = list(sent_to_words(data))
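For example, running simple_preprocess on the sample sentence from the commented-out data shows both steps at once (lowercasing plus stripping symbols and digits); the exact output may vary slightly with the Gensim version:

print(simple_preprocess("I LOVE apples#   &   3241", deacc=True))
# tokens are lowercased; punctuation, digits and single-character tokens are dropped,
# e.g. ['love', 'apples']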

Preprocessing: removing stopwords

def remove_stopwords(texts):
    return [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts]
data_words_nostops = remove_stopwords(data_words)
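A small check of the filter, assuming the NLTK English stopword list loaded above:

print(remove_stopwords([["she", "does", "not", "like", "anything", "except", "apples"]]))
# stopwords such as "she", "does", "not" are removed,
# e.g. [['like', 'anything', 'except', 'apples']]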

Preprocessing: POS tagging + lemmatization

# keep only nouns, verbs, adjectives and adverbs after POS tagging, then lemmatize
def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
    """https://spacy.io/api/annotation"""
    texts_out = []
    for sent in texts:
        doc = nlp(" ".join(sent))
        texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
    return texts_out
# Do lemmatization keeping only noun, adj, vb, adv
data_lemmatized = lemmatization(data_words_nostops, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])
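To see what spaCy contributes here, the snippet below (a sketch; exact tags and lemmas depend on the spaCy model version) prints the POS tag and lemma of each token and then runs the function above:

doc = nlp("she likes red apples")
print([(token.text, token.pos_, token.lemma_) for token in doc])
# e.g. [('she', 'PRON', 'she'), ('likes', 'VERB', 'like'), ('red', 'ADJ', 'red'), ('apples', 'NOUN', 'apple')]
print(lemmatization([["she", "likes", "red", "apples"]]))
# only NOUN/ADJ/VERB/ADV tokens survive the filter, e.g. [['like', 'red', 'apple']]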

Modeling: text vectorization (doc2bow)

# doc2bow is a method provided by Gensim that implements the bag-of-words (BoW) model
# Create Dictionary
id2word = corpora.Dictionary(data_lemmatized)
# Create Corpus
texts = data_lemmatized
# Term Document Frequency
corpus = [id2word.doc2bow(text) for text in texts]
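Each doc2bow vector is a list of (token id, count) pairs; a quick way to inspect the first document and map the ids back to words:

print(corpus[0])                                       # e.g. [(0, 1), (1, 2), ...]
print([(id2word[token_id], freq) for token_id, freq in corpus[0]])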

Modeling: LDA

# still based on Gensim
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
                                           id2word=id2word,
                                           num_topics=2,
                                           random_state=100,
                                           update_every=1,
                                           chunksize=100,
                                           passes=10,
                                           alpha='auto',
                                           per_word_topics=True)
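The CoherenceModel imported earlier is not used in the original code; a sketch of how it (together with perplexity) can be used to evaluate the trained model:

print('Perplexity:', lda_model.log_perplexity(corpus))  # lower is generally better
coherence_model = CoherenceModel(model=lda_model, texts=data_lemmatized,
                                 dictionary=id2word, coherence='c_v')
print('Coherence score:', coherence_model.get_coherence())  # higher is generally better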

Results

# Print the keywords of each topic (num_topics=2 here)
pprint(lda_model.print_topics())
doc_lda = lda_model[corpus]
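Besides the topic keywords, the per-document topic distribution can be read off directly; for example, for the first document:

print(lda_model.get_document_topics(corpus[0]))
# a list of (topic id, probability) pairs, e.g. [(0, 0.93), (1, 0.07)]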

all_code

# imports and data loading
import re
import numpy as np
import pandas as pd
from pprint import pprint
# Gensim
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
# spacy for lemmatization
import spacy
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])  # the bare 'en' shortcut only works in spaCy 2.x
# Plotting tools
import pyLDAvis
import pyLDAvis.gensim  # don't skip this (renamed to pyLDAvis.gensim_models in newer pyLDAvis releases)
import matplotlib.pyplot as plt
# Enable logging for gensim - optional
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.ERROR)
import warnings
warnings.filterwarnings("ignore",category=DeprecationWarning)
# import the NLTK stopword list
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
stop_words.extend(['from', 'subject', 're', 'edu', 'use'])  # extend the stopword list as needed
#data=("I LOVE apples#   &   3241","he likes PIG3s","she do not like anything,except apples.\.")
f=open('xxx.txt','r',encoding='utf-8')
data=f.readlines()
# note: f.read() returns a single string, while f.readlines() returns a list (one element per line)

# lowercase conversion
# remove special characters
def sent_to_words(sentences):
    for sentence in sentences:
        yield(gensim.utils.simple_preprocess(str(sentence), deacc=True))  # simple_preprocess lowercases, tokenizes and drops punctuation/digits; deacc=True also strips accents
data_words = list(sent_to_words(data))
print(data_words)

# remove stopwords
def remove_stopwords(texts):
    return [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts]
data_words_nostops = remove_stopwords(data_words)


# keep only nouns, verbs, adjectives and adverbs after POS tagging, then lemmatize
def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
    """https://spacy.io/api/annotation"""
    texts_out = []
    for sent in texts:
        doc = nlp(" ".join(sent))
        texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
    return texts_out
# Do lemmatization keeping only noun, adj, vb, adv
data_lemmatized = lemmatization(data_words_nostops, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])

# build the dictionary and corpus (bag-of-words) needed for topic modeling
# Create Dictionary
id2word = corpora.Dictionary(data_lemmatized)
# Create Corpus
texts = data_lemmatized
# Term Document Frequency
corpus = [id2word.doc2bow(text) for text in texts]

# build the topic model
# still based on Gensim
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
                                           id2word=id2word,
                                           num_topics=2,
                                           random_state=100,
                                           update_every=1,
                                           chunksize=100,
                                           passes=10,
                                           alpha='auto',
                                           per_word_topics=True)

# inspect the topics in the LDA model
# Print the keywords of each topic (num_topics=2 here)
pprint(lda_model.print_topics())
doc_lda = lda_model[corpus]

Thoughts

  • Gensim's doc2bow vectorization gives inconsistent results, sometimes good and sometimes poor. A next step is to replace the bag-of-words model with TF-IDF or word-embedding models (see the sketch after this list).
  • Some articles (e.g. https://blog.csdn.net/yinghe_one/article/details/89303949) report that implementing Gensim LDA through Mallet (Java) gives better results.
  • The results were not visualized; a next step is to visualize the topics with pyLDAvis (also sketched below).
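A sketch of two of the follow-ups mentioned above, using standard Gensim and pyLDAvis calls (this is not code from the original post, and the output file name is made up):

from gensim.models import TfidfModel

# TF-IDF: re-weight the doc2bow corpus and train LDA on the weighted corpus
tfidf = TfidfModel(corpus)
corpus_tfidf = [tfidf[doc] for doc in corpus]
lda_tfidf = gensim.models.ldamodel.LdaModel(corpus=corpus_tfidf, id2word=id2word,
                                            num_topics=2, random_state=100, passes=10)

# pyLDAvis: interactive topic visualization
# (the module is pyLDAvis.gensim_models in newer releases, pyLDAvis.gensim in older ones)
vis = pyLDAvis.gensim.prepare(lda_model, corpus, id2word)
pyLDAvis.save_html(vis, 'lda_vis.html')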

References (edited)
