Rasa Course, Rasa Training, Rasa Interview, Understanding Word Embeddings GloVe of Rasa Practical Series

One-hot encoding vs. word2vec


  • One-hot encoding example
    Each word gets its own dimension in a sparse vector, but this representation loses the intrinsic meaning of the words and the context of the sentence. Because any two distinct one-hot vectors are orthogonal, the similarity between two different words is always computed as 0 (see the first sketch after this list).
  • word2vec example
    Related words take on similar values along shared features (here gender, royal, age, and food are the features; see the second sketch after this list). Because of this, word embeddings preserve the contextual information that words carry in a sentence and, after training on large datasets, can even handle words that never appeared in the sentences used to build the task-specific vector representations. This way, models trained on large datasets can be reused on relatively small datasets, so word embeddings are well suited for transfer learning. Thanks to this property, word embeddings are useful in a wide range of applications such as named entity recognition, text summarization, coreference resolution, and parsing.
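A minimal sketch of the one-hot case, using a hypothetical toy vocabulary (the words and helper names are illustrative, not from the original post). It shows why the similarity between any two distinct one-hot vectors is 0: the vectors are orthogonal.

```python
import numpy as np

# Hypothetical toy vocabulary; any word list would behave the same way.
vocab = ["king", "queen", "apple", "orange"]
word_to_index = {word: i for i, word in enumerate(vocab)}

def one_hot(word: str) -> np.ndarray:
    """Return a sparse one-hot vector for a word in the vocabulary."""
    vec = np.zeros(len(vocab))
    vec[word_to_index[word]] = 1.0
    return vec

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Any two distinct one-hot vectors are orthogonal, so their similarity is 0.
print(cosine_similarity(one_hot("king"), one_hot("queen")))  # 0.0
```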
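And a corresponding sketch of the dense-embedding case. The feature values below are hand-assigned along the features named above (gender, royal, age, food) purely for illustration; real word2vec or GloVe vectors are learned from large corpora, not written by hand.

```python
import numpy as np

# Hypothetical hand-crafted embeddings along (gender, royal, age, food).
embeddings = {
    #                    gender  royal  age   food
    "king":   np.array([-1.0,   0.95,  0.7,  0.0]),
    "queen":  np.array([ 1.0,   0.97,  0.6,  0.0]),
    "apple":  np.array([ 0.0,   0.01,  0.0,  0.95]),
    "orange": np.array([ 0.0,   0.00,  0.0,  0.97]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related words now get high similarity, unrelated words get low similarity.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # close to 0
```

This is the property the post relies on: dense vectors place related words near each other, which is what lets a pipeline such as Rasa reuse pretrained GloVe-style vectors on a small task dataset.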
