What is an Embedding in Keras?

The Keras documentation isn't clear about what this actually is. I understand we can use it to compress the input feature space into a smaller one. But how is this done from a neural design perspective? Is it an autoencoder, an RBM?

As far as I know, the Embedding layer is a simple matrix multiplication that transforms words into their corresponding word embeddings.


The weights of the Embedding layer have the shape (vocabulary_size, embedding_dimension). For each training sample, the input is a sequence of integers, each representing a certain word. The integers are in the range of the vocabulary size. The Embedding layer maps each integer i to the i-th row of the embedding weights matrix.
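As a minimal sketch of that lookup (assuming TensorFlow's Keras API; the sizes and indices below are made-up values for illustration):

```python
import numpy as np
from tensorflow.keras.layers import Embedding

# Hypothetical sizes for illustration.
vocabulary_size = 10      # number of distinct words
embedding_dimension = 4   # size of each word vector

layer = Embedding(input_dim=vocabulary_size, output_dim=embedding_dimension)

# One sample: a sequence of word indices in [0, vocabulary_size).
sample = np.array([[3, 1, 7]])

vectors = layer(sample)            # shape: (1, 3, embedding_dimension)
weights = layer.get_weights()[0]   # shape: (vocabulary_size, embedding_dimension)

# Each output vector is simply row i of the weight matrix.
print(np.allclose(vectors.numpy()[0, 0], weights[3]))  # True
```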


You can view this as a matrix multiplication: encode the input integers as a one-hot matrix rather than a list of integers, so the input shape becomes (nb_words, vocabulary_size) with one non-zero value per row. If you multiply this by the embedding weights, you get the output in the shape


(nb_words, vocab_size) x (vocab_size, embedding_dim) = (nb_words, embedding_dim)


So with a simple matrix multiplication you transform all the words in a sample into the corresponding word embeddings.
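Here is a small NumPy sketch of that equivalence (the sizes and indices are made up for illustration); multiplying the one-hot matrix by the weights gives exactly the same rows as indexing into the weight matrix:

```python
import numpy as np

# Hypothetical sizes for illustration.
nb_words, vocab_size, embedding_dim = 3, 10, 4

# Random embedding weight matrix, shape (vocab_size, embedding_dim).
weights = np.random.rand(vocab_size, embedding_dim)

# A sample of word indices and its one-hot encoding, shape (nb_words, vocab_size).
indices = np.array([3, 1, 7])
one_hot = np.eye(vocab_size)[indices]

# (nb_words, vocab_size) x (vocab_size, embedding_dim) = (nb_words, embedding_dim)
product = one_hot @ weights

# The multiplication just selects rows of the weight matrix.
print(np.allclose(product, weights[indices]))  # True
```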


https://stackoverflow.com/questions/38189713/what-is-an-embedding-in-keras
