Rasa Course, Rasa Training, Rasa Interview, General Embeddings vs. Specific Problems of Rasa Practical Series

General Embeddings vs. Specific Problems

DIET paper https://arxiv.org/abs/2004.09936
DIET: Lightweight Language Understanding for Dialogue Systems

Large-scale pre-trained language models have shown impressive results on language understanding benchmarks such as GLUE and SuperGLUE, with large improvements over other pre-training methods such as distributed representations (GloVe) and purely supervised methods. We introduce the Dual Intent and Entity Transformer (DIET) architecture and investigate the effectiveness of different pretrained representations on two common dialogue language understanding tasks, intent and entity prediction. DIET advances the state-of-the-art on complex multi-domain NLU datasets and achieves similar high performance on other simpler datasets. Surprisingly, we show that using a large pretrained model for this task has no discernible benefit, and in fact DIET improves on the current state of the art even in a purely supervised setting without any pretrained embeddings.
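The abstract's key finding is that DIET performs strongly even without any pretrained embeddings. In Rasa, this "purely supervised" setting corresponds to a pipeline that feeds only sparse features (word and character n-gram counts) into `DIETClassifier`. The sketch below is a minimal, hypothetical `config.yml`; the component names are real Rasa NLU components, but the `epochs` and n-gram values are illustrative, not taken from the paper.

```yaml
# Minimal sketch of a Rasa pipeline running DIET without pretrained embeddings.
# DIETClassifier handles intent classification and entity extraction jointly.
pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer            # word-level sparse features
  - name: CountVectorsFeaturizer            # character n-gram sparse features
    analyzer: char_wb
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier
    epochs: 100                             # illustrative value
```

To compare against the pretrained setting the paper investigates, one would add a dense featurizer (e.g. `LanguageModelFeaturizer` with a BERT-style model) before `DIETClassifier` and measure whether intent and entity scores actually improve.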


Source: blog.csdn.net/duan_zhihua/article/details/123726442