Word vectors and word embedding


Embedding

Embedding denotes a mapping. Mathematically, a map f: X -> Y sends each element x of the source space X to a unique element y of the target space Y. To embed x into Y, then, is to find the unique position y in Y that corresponds to x.

Word embedding

That is, every word in the text corpus is embedded into a vector space, and each word corresponds to a unique vector in that space. This vector is the word's word vector.
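
For instance, a toy version of this mapping can be written as a plain lookup table. A minimal sketch in Python; the words and vector values below are made up purely for illustration:

```python
# A tiny made-up vocabulary: each word maps to exactly one vector
# in a shared 3-dimensional vector space.
word_vectors = {
    "king":  [0.8, 0.3, 0.1],
    "queen": [0.7, 0.4, 0.2],
    "apple": [0.1, 0.9, 0.6],
}

# Looking a word up returns its unique word vector.
print(word_vectors["king"])   # [0.8, 0.3, 0.1]
```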

So one-hot encoding is one scheme for implementing word embedding, and word2vec was later proposed to achieve word embedding as well.
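
For example, here is a minimal one-hot implementation; the three-word vocabulary is made up for illustration:

```python
# One-hot encoding as the simplest word embedding: with a vocabulary of
# size V, each word maps to a V-dimensional vector that is all zeros
# except for a single 1 at that word's index.
vocab = ["king", "queen", "apple"]
word_to_index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    vec = [0] * len(vocab)
    vec[word_to_index[word]] = 1
    return vec

print(one_hot("queen"))   # [0, 1, 0]
```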

Why propose word embedding?

The essential idea is this: map every word in the corpus to a unique vector in a vector space, such that the direction and position of the vector in that space measure, in some way, the word's meaning and even its sentiment. Historically, after the traditional statistics-based n-gram model came the one-hot model, which is where word vectors began. But one-hot encoding is a hard classification: it can only tell whether two words are the same or different, so the meaning carried by a word is lost and the semantic relationships between words are ignored, which is unreasonable. Hence word-vector models trained with deep learning were proposed and then improved, leading to the now famous word2vec model.
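
To see concretely why one-hot loses meaning: any two distinct one-hot vectors are orthogonal, so by cosine similarity every word is equally unrelated to every other. A small numpy check on the same toy three-word vocabulary:

```python
import numpy as np

# One-hot vectors for a toy vocabulary ["king", "queen", "apple"].
king  = np.array([1, 0, 0])
queen = np.array([0, 1, 0])
apple = np.array([0, 0, 1])

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Both similarities are 0: "king" is exactly as unrelated to "queen"
# as it is to "apple". One-hot can only say "same word or not".
print(cos_sim(king, queen))   # 0.0
print(cos_sim(king, apple))   # 0.0
```
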
The word2vec model itself actually contains two algorithms, CBOW and skip-gram, which map the words of a corpus into a vector space and obtain for each word a vector that carries its meaning.
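
As a rough sketch of what training looks like in practice, here is the gensim library's Word2Vec run on a made-up toy corpus (parameter names follow gensim 4.x; the sg flag switches between the two algorithms, 0 for CBOW and 1 for skip-gram):

```python
from gensim.models import Word2Vec

# A made-up toy corpus: a list of tokenized sentences.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["i", "ate", "an", "apple"],
]

# sg=1 selects skip-gram; sg=0 would select CBOW.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["king"])               # the learned 50-dimensional word vector
print(model.wv.most_similar("king"))  # nearest words in the vector space
```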

Embedding layer

When first learning NLP, we often hear the term "embedding layer" without knowing what it means.
My understanding is that the embedding layer is a fully connected layer. The parameters of this fully connected layer form a matrix. Multiplying a word's one-hot encoding by this weight matrix produces a new vector, and that vector is the word vector; a fully connected layer used this way is called an embedding layer. In effect, the matrix maps a word to its vector. As for how this parameter matrix, i.e. the weights of the embedding layer, is obtained, study the two algorithms inside the word2vec model in detail. Put bluntly: when deep learning is used to train some other task on words, the weights of the first fully connected layer are kept afterwards; that mapping matrix is the embedding layer.
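
A small numpy sketch makes the "embedding layer is a fully connected layer" point concrete: multiplying a one-hot vector by the weight matrix W simply selects one row of W, and that row is the word vector. W is random here purely for illustration; in practice it is learned, e.g. by word2vec or as part of a downstream task:

```python
import numpy as np

V, d = 3, 4               # vocabulary size, word-vector dimension
W = np.random.rand(V, d)  # the embedding layer's weight matrix

one_hot_queen = np.array([0, 1, 0])          # "queen" is index 1 in the vocabulary
print(one_hot_queen @ W)                     # the word vector for "queen"
print(np.allclose(one_hot_queen @ W, W[1]))  # True: it is just a row lookup
```

This is also why frameworks typically implement the embedding layer as a table lookup by index rather than an actual matrix multiplication.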

In closing

As for what the word2vec model actually is, I will not elaborate here for now. During my own re-learning I found that the vast majority of explanations mix word embedding and word2vec together without making clear what word embedding actually is. After reading a senior's very clear explanation, I recorded my own understanding for others' reference. Of course, my understanding may not be entirely right; criticism and corrections are welcome.


Source: www.cnblogs.com/Lin-Yi/p/11611428.html