Gensim and LDA: a quick tour

Notebook link: http://nbviewer.jupyter.org/gist/boskaiolo/cc3e1341f59bfbd02726


First, set the verbosity of the logger. In this example we log only warnings, but for better debugging you can raise the level and print all the INFO messages.

In [1]:
import logging
logging.basicConfig(format='%(levelname)s : %(message)s', level=logging.WARNING)
logging.root.level = logging.WARNING
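
If you later want the more verbose output mentioned above (for example, to follow gensim's training progress), you can raise the level to INFO instead; a minimal variant of the cell above:

import logging
logging.basicConfig(format='%(levelname)s : %(message)s', level=logging.INFO)
logging.root.level = logging.INFO  # INFO messages (e.g. training progress) will now be printed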

Now it's time to get some textual data. We're going to use the 20 newsgroups dataset (more info here: http://qwone.com/~jason/20Newsgroups). As stated by its creators, it is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups.

To make things more realistic, we're removing email headers, footers (like signatures) and quoted messages.

In [2]:
from sklearn import datasets
news_dataset = datasets.fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))
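
Before any processing, it can be handy to peek at the 20 category names that scikit-learn ships with the dataset (just a quick sanity check, using the target_names attribute of the returned bunch):

# The 20 newsgroup names; each document's category index is stored in news_dataset.target
print news_dataset.target_names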
In [3]:
# A list of text document is contained in the data variable
documents = news_dataset.data

print "In the dataset there are", len(documents), "textual documents"
print "And this is the first one:\n", documents[0]
In the dataset there are 18846 textual documents
And this is the first one:


I am sure some bashers of Pens fans are pretty confused about the lack
of any kind of posts about the recent Pens massacre of the Devils. Actually,
I am  bit puzzled too and a bit relieved. However, I am going to put an end
to non-PIttsburghers' relief with a bit of praise for the Pens. Man, they
are killing those Devils worse than I thought. Jagr just showed you why
he is much better than his regular season stats. He is also a lot
fo fun to watch in the playoffs. Bowman should let JAgr have a lot of
fun in the next couple of games since the Pens are going to beat the pulp out of Jersey anyway. I was very disappointed not to see the Islanders lose the final
regular season game.          PENS RULE!!!


We now have a collection of documents. Let's start with some preprocessing steps. First, we import all the modules we need. Then, we define a word tokenizer (https://en.wikipedia.org/wiki/Tokenization_(lexical_analysis)) with stopword removal (common words like "the", "are" and "and" are excluded from the processing, since they don't have discriminative power and they just increase the processing complexity).

In [4]:
import gensim
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
In [5]:
def tokenize(text):
    return [token for token in simple_preprocess(text) if token not in STOPWORDS]

print "After the tokenizer, the previous document becomes:\n", tokenize(documents[0])
After the tokenizer, the previous document becomes:
[u'sure', u'bashers', u'pens', u'fans', u'pretty', u'confused', u'lack', u'kind', u'posts', u'recent', u'pens', u'massacre', u'devils', u'actually', u'bit', u'puzzled', u'bit', u'relieved', u'going', u'end', u'non', u'pittsburghers', u'relief', u'bit', u'praise', u'pens', u'man', u'killing', u'devils', u'worse', u'thought', u'jagr', u'showed', u'better', u'regular', u'season', u'stats', u'lot', u'fo', u'fun', u'watch', u'playoffs', u'bowman', u'let', u'jagr', u'lot', u'fun', u'couple', u'games', u'pens', u'going', u'beat', u'pulp', u'jersey', u'disappointed', u'islanders', u'lose', u'final', u'regular', u'season', u'game', u'pens', u'rule']
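
To see what each step contributes, here is a tiny sanity check on a made-up sentence (the sentence itself is only an illustration):

example = "The quick brown fox jumps over the lazy dog"
print simple_preprocess(example)  # lowercased and tokenized
print tokenize(example)           # the same, with stopwords removed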

Next step: tokenize all the documents and build a gensim Dictionary, which maps each unique token to an integer id and keeps track of the token counts over the complete text corpus.

In [6]:
processed_docs = [tokenize(doc) for doc in documents]
word_count_dict = gensim.corpora.Dictionary(processed_docs)
print "In the corpus there are", len(word_count_dict), "unique tokens"
In the corpus there are 95507 unique tokens
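
The Dictionary assigns an integer id to each unique token; a quick sketch of the mapping (the token picked here is just an example taken from the first document):

# Map a token to its id, and back
token_id = word_count_dict.token2id['season']
print "Id of 'season':", token_id
print "Token with id", token_id, "is:", word_count_dict[token_id]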

We might want to further lower the complexity of the process by removing all the very rare tokens (the ones appearing in fewer than 20 documents) and the very popular ones (the ones appearing in more than 10% of the documents; in our case, roughly 1,900 documents).

In [7]:
word_count_dict.filter_extremes(no_below=20, no_above=0.1) # keep tokens appearing in at least 20 documents and in no more than 10% of the documents
In [8]:
print "After filtering, in the corpus there are only", len(word_count_dict), "unique tokens"
After filtering, in the corpus there are only 8121 unique tokens
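
If you are curious about which tokens survived the filtering, the dictionary stores the document frequency of each token in its dfs attribute; a small sketch (not part of the original notebook) to list the most widespread remaining tokens:

# The 10 remaining tokens that appear in the largest number of documents
most_common = sorted(word_count_dict.dfs.items(), key=lambda item: -item[1])[:10]
for token_id, doc_freq in most_common:
    print word_count_dict[token_id], "appears in", doc_freq, "documents"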

Let's now build the bag-of-words representation (https://en.wikipedia.org/wiki/Bag-of-words_model) of the text documents, to create a nice vector space model (https://en.wikipedia.org/wiki/Vector_space_model). Within this model, each vector lists the multiplicity of the tokens appearing in the document. The vector is indexed by the dictionary of tokens previously built. Note that, since only a restricted subset of words appears in each document, this vector is often represented in a sparse way.

In [9]:
bag_of_words_corpus = [word_count_dict.doc2bow(pdoc) for pdoc in processed_docs]
In [10]:
bow_doc1 = bag_of_words_corpus[0]

print "Bag of words representation of the first document (tuples are composed by token_id and multiplicity):\n", bow_doc1
print
for i in range(5):
    print "In the document, topic_id {} (word \"{}\") appears {} time[s]".format(bow_doc1[i][0], word_count_dict[bow_doc1[i][0]], bow_doc1[i][1])
print "..."
Bag of words representation of the first document (tuples are composed of token_id and multiplicity):
[(219, 1), (770, 2), (780, 2), (1353, 1), (1374, 1), (1567, 1), (1722, 2), (2023, 1), (2698, 1), (3193, 1), (3214, 1), (3352, 1), (3466, 1), (3754, 1), (3852, 1), (3879, 1), (3965, 1), (4212, 1), (4303, 2), (4677, 1), (4702, 1), (4839, 1), (4896, 1), (5000, 1), (5242, 5), (5396, 2), (5403, 1), (5453, 2), (5509, 3), (5693, 1), (5876, 1), (5984, 1), (6211, 1), (6272, 1), (6392, 1), (6436, 1), (6676, 1), (6851, 2), (6884, 1), (7030, 1), (7162, 1), (7185, 1), (7370, 1), (7882, 1)]

In the document, token_id 219 (word "showed") appears 1 time[s]
In the document, token_id 770 (word "jagr") appears 2 time[s]
In the document, token_id 780 (word "going") appears 2 time[s]
In the document, token_id 1353 (word "recent") appears 1 time[s]
In the document, token_id 1374 (word "couple") appears 1 time[s]
...
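
Since the vector space has as many dimensions as there are tokens in the dictionary, but each document uses only a handful of them, gensim keeps the vectors as sparse lists of (token_id, count) pairs. If you ever need the dense version, matutils can expand it; a sketch:

from gensim import matutils

# Expand the sparse bag-of-words vector into a dense numpy array
dense_doc1 = matutils.sparse2full(bow_doc1, len(word_count_dict))
print "Dense vector length:", len(dense_doc1)
print "Non-zero entries:", (dense_doc1 > 0).sum()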

Now, finally, the core algorithm of the analysis: LDA. Gensim offers two implementations: a single-core one and a multicore one. We use the single-core one here, setting the number of topics equal to 10 (you can change it and check the results). Try the multicore one to see the speedup!

In [11]:
# LDA mono-core
lda_model = gensim.models.LdaModel(bag_of_words_corpus, num_topics=10, id2word=word_count_dict, passes=5)

# LDA multicore (in this configuration, by default, it uses n_cores - 1 workers)
# lda_model = gensim.models.LdaMulticore(bag_of_words_corpus, num_topics=10, id2word=word_count_dict, passes=5)
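
If you want to measure the speedup yourself, a rough comparison can be done with the standard time module (a sketch; timings will depend on your machine, and a single pass is used here to keep it quick):

import time

start = time.time()
gensim.models.LdaModel(bag_of_words_corpus, num_topics=10, id2word=word_count_dict, passes=1)
print "Single-core LDA took", time.time() - start, "seconds"

start = time.time()
gensim.models.LdaMulticore(bag_of_words_corpus, num_topics=10, id2word=word_count_dict, passes=1)
print "Multicore LDA took", time.time() - start, "seconds"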

Here's a list of the words (and their relative weights) for each topic:

In [12]:
for topic in lda_model.print_topics(-1):  # with logging set to WARNING the topics are not logged, so print the returned list explicitly
    print topic

Let's now print the topic composition, and the corresponding scores, for the first document. You will see that only a few topics are represented; the ones with a negligible score are omitted.

In [13]:
for index, score in sorted(lda_model[bag_of_words_corpus[0]], key=lambda tup: -1*tup[1]):
    print "Score: {}\t Topic: {}".format(score, lda_model.print_topic(index, 10))
Score: 0.853884500928	 Topic: 0.015*game + 0.012*team + 0.010*year + 0.009*games + 0.007*st + 0.007*play + 0.006*season + 0.006*hockey + 0.005*league + 0.005*players
Score: 0.0846334499472	 Topic: 0.019*space + 0.008*nasa + 0.007*earth + 0.006*science + 0.005*data + 0.005*research + 0.005*launch + 0.005*center + 0.004*program + 0.004*orbit
Score: 0.0284017012333	 Topic: 0.010*said + 0.010*israel + 0.006*medical + 0.006*children + 0.006*israeli + 0.005*years + 0.005*women + 0.004*arab + 0.004*killed + 0.004*disease
Score: 0.0227330510447	 Topic: 0.011*turkish + 0.011*db + 0.009*armenian + 0.008*turkey + 0.006*greek + 0.006*armenians + 0.006*jews + 0.006*muslim + 0.006*homosexuality + 0.005*turks
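
Note that lda_model[bow] returns only the topics whose probability exceeds a small threshold; the others are simply omitted. If your gensim version exposes get_document_topics, you can ask for the full distribution explicitly (a sketch):

# Request every topic, including the ones with a (near-)zero probability
full_dist = lda_model.get_document_topics(bag_of_words_corpus[0], minimum_probability=0)
for topic_id, prob in full_dist:
    print "Topic {}: {:.4f}".format(topic_id, prob)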

That's wonderful! LDA is able to understand that the article is about a team game, hockey, even though the word "hockey" never appears in the document. Checking the ground truth for that document (the newsgroup it was posted to) confirms it: it comes from the sport/hockey category. The other topics, if any, account for less than 5% each, so they can be considered marginal (noise).

In [14]:
news_dataset.target_names[news_dataset.target[0]]
Out[14]:
'rec.sport.hockey'
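
The same sanity check can be repeated on a few more documents, comparing the dominant LDA topic with the newsgroup each post actually came from (a sketch; remember that topics are learned without supervision, so the correspondence between topic ids and newsgroups is only informal):

# Dominant topic vs. true newsgroup for the first few documents
for doc_id in range(5):
    topics = sorted(lda_model[bag_of_words_corpus[doc_id]], key=lambda tup: -tup[1])
    if not topics:  # skip documents that are empty after preprocessing
        continue
    dominant_topic, score = topics[0]
    print "Doc {}: dominant topic {} (score {:.2f}), true category: {}".format(
        doc_id, dominant_topic, score, news_dataset.target_names[news_dataset.target[doc_id]])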

So far, we have dealt with documents contained in the training set. What if we need to process an unseen document? Fortunately, we don't need to re-train the model (wasting lots of time), as we can just infer its topic distribution.

In [16]:
unseen_document = "In my spare time I either play badmington or drive my car"
print "The unseen document is composed by the following text:", unseen_document
print

bow_vector = word_count_dict.doc2bow(tokenize(unseen_document))
for index, score in sorted(lda_model[bow_vector], key=lambda tup: -1*tup[1]):
    print "Score: {}\t Topic: {}".format(score, lda_model.print_topic(index, 5))
The unseen document is composed of the following text: In my spare time I either play badmington or drive my car

Score: 0.631871020975	 Topic: 0.007*car + 0.005*ll + 0.005*got + 0.004*little + 0.004*power
Score: 0.208106465922	 Topic: 0.015*game + 0.012*team + 0.010*year + 0.009*games + 0.007*st
Score: 0.0200214219043	 Topic: 0.014*windows + 0.014*dos + 0.012*drive + 0.010*thanks + 0.010*card
Score: 0.0200004776176	 Topic: 0.010*said + 0.010*israel + 0.006*medical + 0.006*children + 0.006*israeli
Score: 0.0200003461406	 Topic: 0.009*government + 0.009*key + 0.007*public + 0.005*president + 0.005*law
Score: 0.0200002155703	 Topic: 0.014*god + 0.006*believe + 0.004*jesus + 0.004*said + 0.004*point
Score: 0.0200000317801	 Topic: 0.011*turkish + 0.011*db + 0.009*armenian + 0.008*turkey + 0.006*greek
Score: 0.020000020082	 Topic: 0.013*file + 0.013*edu + 0.010*image + 0.008*available + 0.007*ftp
Score: 0.0200000000038	 Topic: 0.019*space + 0.008*nasa + 0.007*earth + 0.006*science + 0.005*data
Score: 0.0200000000037	 Topic: 0.791*ax + 0.059*max + 0.009*pl + 0.007*di + 0.007*tm
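
If instead you want the model itself to learn from new material, gensim's LDA also supports online updates: you can feed an additional batch of bag-of-words vectors to the already trained model instead of retraining from scratch. A sketch, assuming new_documents is a list of raw texts you provide (the text below is purely hypothetical):

# Online update: extend the trained model with new documents
new_documents = ["Some fresh text about space missions and NASA launches"]  # hypothetical example
new_bow = [word_count_dict.doc2bow(tokenize(doc)) for doc in new_documents]
lda_model.update(new_bow)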
In [17]:
print "Log perplexity of the model is", lda_model.log_perplexity(bag_of_words_corpus)
Log perplexity of the model is -7.58115143751
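
The value returned by log_perplexity is the per-word likelihood bound; gensim itself reports the corresponding perplexity estimate as 2 raised to the negated bound, so you can recover it with a one-liner (a sketch):

# Per-word perplexity estimate, as gensim derives it from the bound
bound = lda_model.log_perplexity(bag_of_words_corpus)
print "Per-word perplexity estimate:", 2 ** (-bound)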
