DataWhale Team Check-in Learning Camp, Task 02-1

Text preprocessing

Text is a type of sequence data: an article can be viewed as a sequence of characters or a sequence of words. This section introduces the common preprocessing steps for text data, which generally consist of four steps:
1. Read the text
2. Tokenize
3. Build a dictionary (vocabulary) that maps each word to a unique index
4. Convert the text from a sequence of words into a sequence of indices, so it can be fed into models conveniently

Reading the text
We use an English novel, H. G. Wells's The Time Machine, as an example to show the concrete steps of text preprocessing.

import collections
import re

def read_time_machine():
    with open('/home/kesci/input/timemachine7163/timemachine.txt', 'r') as f:
        # lower-case each line and replace every run of non-letter characters with a space
        lines = [re.sub('[^a-z]+', ' ', line.strip().lower()) for line in f]
    return lines

lines = read_time_machine()
print('# sentences %d' % len(lines))  # sentences 3221
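
To make the cleaning step concrete, here is a minimal sketch (the sample sentence is made up for illustration) showing how strip(), lower(), and the regular expression replace every run of non-letter characters with a single space; re is already imported above:

sample = "The Time Traveller (for so it will be convenient to speak of him) ..."
cleaned = re.sub('[^a-z]+', ' ', sample.strip().lower())
print(cleaned)  # the time traveller for so it will be convenient to speak of him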

Tokenization
We tokenize each sentence, that is, split a sentence into a number of tokens (words), turning it into a sequence of words.

def tokenize(sentences, token='word'):
    """Split sentences into word or char tokens"""
    if token == 'word':
        return [sentence.split(' ') for sentence in sentences]
    elif token == 'char':
        return [list(sentence) for sentence in sentences]
    else:
        print('ERROR: unknown token type '+token)

tokens = tokenize(lines)
tokens[0:2]

[['the', 'time', 'machine', 'by', 'h', 'g', 'wells', ''], ['']]
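
The same tokenize function can also produce character-level tokens by passing token='char'. A small sketch (the expected output follows from the first line shown above):

char_tokens = tokenize(lines, token='char')
print(char_tokens[0][:10])  # ['t', 'h', 'e', ' ', 't', 'i', 'm', 'e', ' ', 'm']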

Building a dictionary
To make it easier for the model to process, we need to convert strings into numbers. So we need to build a dictionary (vocabulary) that maps each word to a unique index.

class Vocab(object):
    def __init__(self, tokens, min_freq=0, use_special_tokens=False):
        counter = count_corpus(tokens)  # <key, value>: <token, token frequency>
        self.token_freqs = list(counter.items())
        self.idx_to_token = []
        if use_special_tokens:
            # padding, begin of sentence, end of sentence, unknown
            self.pad, self.bos, self.eos, self.unk = (0, 1, 2, 3)
            self.idx_to_token += ['<pad>', '<bos>', '<eos>', '<unk>']
        else:
            self.unk = 0
            self.idx_to_token += ['<unk>']
        self.idx_to_token += [token for token, freq in self.token_freqs
                        if freq >= min_freq and token not in self.idx_to_token]
        self.token_to_idx = dict()
        for idx, token in enumerate(self.idx_to_token):
            self.token_to_idx[token] = idx

    def __len__(self):
        return len(self.idx_to_token)

    def __getitem__(self, tokens):
        # map a token (or a list/tuple of tokens) to its index; unknown tokens map to self.unk
        if not isinstance(tokens, (list, tuple)):
            return self.token_to_idx.get(tokens, self.unk)
        return [self.__getitem__(token) for token in tokens]

    def to_tokens(self, indices):
        # map an index (or a list/tuple of indices) back to its token
        if not isinstance(indices, (list, tuple)):
            return self.idx_to_token[indices]
        return [self.idx_to_token[index] for index in indices]

def count_corpus(sentences):
    tokens = [tk for st in sentences for tk in st]
    return collections.Counter(tokens)  # returns a Counter (dict) recording how many times each token appears

Let's look at an example, where we build a dictionary using The Time Machine as the corpus.

vocab = Vocab(tokens)
print(list(vocab.token_to_idx.items())[0:10])
# [('<unk>', 0), ('the', 1), ('time', 2), ('machine', 3), ('by', 4), ('h', 5), ('g', 6), ('wells', 7), ('i', 8), ('traveller', 9)]
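
The constructor also takes min_freq and use_special_tokens arguments. As a sketch of how they can be used (the exact vocabulary size depends on the corpus), the following builds a vocabulary that reserves the four special tokens and drops words that appear fewer than 2 times:

vocab_special = Vocab(tokens, min_freq=2, use_special_tokens=True)
# indices 0-3 are reserved for the special tokens
print(vocab_special.to_tokens([0, 1, 2, 3]))  # ['<pad>', '<bos>', '<eos>', '<unk>']
print(len(vocab_special))  # vocabulary size after filtering rare words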

Converting words to indices
Using the dictionary, we can convert sentences in the original text from sequences of words into sequences of indices.

for i in range(8, 10):
    print('words:', tokens[i])
    print('indices:', vocab[tokens[i]])

words: ['the', 'time', 'traveller', 'for', 'so', 'it', 'will', 'be', 'convenient', 'to', 'speak', 'of', 'him', '']
indices: [1, 2, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 0]
words: ['was', 'expounding', 'a', 'recondite', 'matter', 'to', 'us', 'his', 'grey', 'eyes', 'shone', 'and']
indices: [20, 21, 22, 23, 24, 16, 25, 26, 27, 28, 29, 30]
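
The mapping also works in the other direction: to_tokens converts indices back into words, so we can check the round trip (using indices taken from the output above):

print(vocab.to_tokens([1, 2, 9]))   # ['the', 'time', 'traveller']
print(vocab[['the', 'time', 'traveller']])  # [1, 2, 9]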

Tokenization with existing tools
The tokenization method described above is very simple, and it has at least the following drawbacks:

1. Punctuation usually carries semantic information, but our approach simply discards it
2. Words like "shouldn't" and "doesn't" are handled incorrectly
3. Words like "Mr." and "Dr." are handled incorrectly

We could address these problems by introducing more complicated rules, but in fact there are a number of existing tools that tokenize well. Here we briefly introduce two of them: spaCy and NLTK.

The following is a simple example:

text = "Mr. Chen doesn't agree with my suggestion."

spaCy:

import spacy
nlp = spacy.load('en_core_web_sm')
doc = nlp(text)
print([token.text for token in doc])

['Mr.', 'Chen', 'does', "n't", 'agree', 'with', 'my', 'suggestion', '.']

NLTK:

from nltk.tokenize import word_tokenize
from nltk import data
data.path.append('/home/kesci/input/nltk_data3784/nltk_data')
print(word_tokenize(text))

['Mr.', 'Chen', 'does', "n't", 'agree', 'with', 'my', 'suggestion', '.']
