How CNNs are used in natural language processing (NLP)

来自 http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/

SO, HOW DOES ANY OF THIS APPLY TO NLP?

Instead of image pixels, the input to most NLP tasks is sentences or documents represented as a matrix. Each row of the matrix corresponds to one token, typically a word, but it could be a character. That is, each row is a vector that represents a word. Typically, these vectors are word embeddings (low-dimensional representations) like word2vec or GloVe, but they could also be one-hot vectors that index the word into a vocabulary. For a 10-word sentence using a 100-dimensional embedding we would have a 10×100 matrix as our input. That’s our “image”.
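The sentence-as-matrix idea can be sketched in a few lines of NumPy. This is a minimal illustration, not the article's code: the toy vocabulary and random embedding table are assumptions standing in for real word2vec or GloVe vectors.

```python
import numpy as np

# Hypothetical setup: a tiny vocabulary and a 100-dimensional embedding.
vocab = {"the": 0, "cat": 1, "sat": 2}  # toy vocabulary (assumption)
embedding_dim = 100

rng = np.random.default_rng(0)
# In practice these rows would come from word2vec or GloVe; here they are random.
embedding_table = rng.standard_normal((len(vocab), embedding_dim))

sentence = ["the", "cat", "sat"]
# Stack one embedding row per token: the sentence becomes a (num_words x 100) matrix.
sentence_matrix = np.stack([embedding_table[vocab[w]] for w in sentence])
print(sentence_matrix.shape)  # (3, 100) -- a 10-word sentence would give (10, 100)
```

Each row of `sentence_matrix` is one word's embedding, so the whole matrix plays the role the pixel grid plays in vision.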

In vision, our filters slide over local patches of an image, but in NLP we typically use filters that slide over full rows of the matrix (words). Thus, the “width” of our filters is usually the same as the width of the input matrix. The height, or region size, may vary, but sliding windows over 2–5 words at a time is typical. Putting all the above together, a Convolutional Neural Network for NLP may look like this (take a few minutes and try to understand this picture and how the dimensions are computed; you can ignore the pooling for now, we’ll explain that later):

Each word is turned into a vector with word embeddings, and the sentence is then stacked into an “image”. In the figure below, the colored blocks in the second column are the convolution filters. The model shown is used to solve a classification problem.
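The key geometric point above, that a filter spans the full embedding width and only slides over the word dimension, can be sketched directly. This is an illustrative example with random data and a single filter of region size 3; the shapes are assumptions matching the 10×100 "image" described earlier.

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, embed_dim = 10, 100
# Stand-in for a 10-word sentence of 100-dim embeddings.
sentence_matrix = rng.standard_normal((seq_len, embed_dim))

region_size = 3  # the filter covers 3 consecutive words at a time
# Filter width equals the embedding width, so it can only slide vertically.
filt = rng.standard_normal((region_size, embed_dim))

# "Valid" convolution down the sentence: one scalar per window of 3 words.
feature_map = np.array([
    np.sum(sentence_matrix[i:i + region_size] * filt)
    for i in range(seq_len - region_size + 1)
])
print(feature_map.shape)  # (8,): 10 - 3 + 1 positions
```

A real model would use many such filters with several region sizes (e.g. 2, 3, 4) and then pool each feature map, which is the step the article defers to later.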



Reposted from blog.csdn.net/hzq20081121107/article/details/70237993