Reposted: A Light Introduction to Transfer Learning for NLP

1. Pre-training allows a model to capture and learn a variety of linguistic phenomena, such as long-range dependencies and negation, from a large corpus.

2. That knowledge is then used (transferred) to initialize another model, which is fine-tuned to perform well on a specific NLP task, such as sentiment classification (a sketch of this workflow follows the list).

3. This works for language because such phenomena carry over between tasks. For example, negation is an important attribute for detecting the sentiment polarity of a text message. Negation may also be useful for detecting irony and sarcasm, which remains one of the most complex and unresolved tasks in NLP.

4. Having a universal language model may be useful for NLP research where annotated datasets or other language resources are lacking.

5. So far, we know that the knowledge obtained from pretrained language models, for example in the form of word embeddings, performs very well on many NLP tasks.

6. The problem here is that the knowledge these features capture exists in a latent form and is not broad enough to perform well on the target or downstream task.
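
To make points 1, 2, and 6 concrete, here is a minimal sketch of the pretrain-then-fine-tune workflow for sentiment classification, using the Hugging Face transformers library with PyTorch. The checkpoint name (bert-base-uncased), the toy examples, and the hyperparameters are illustrative assumptions, not details from the original article.

```python
# A minimal sketch: fine-tune a pretrained language model for binary
# sentiment classification (assumed setup, not the article's own code).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Point 1: load a model whose weights were pretrained on a large corpus,
# so it already encodes phenomena such as long-range dependencies and negation.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = negative, 1 = positive
)

# Point 2: fine-tune on a (toy) labeled sentiment dataset so the latent
# knowledge is adapted to the downstream task (the gap noted in point 6).
texts = ["I loved this movie.", "This was not a good experience."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # forward pass also computes the loss
outputs.loss.backward()                  # gradients flow into the pretrained weights
optimizer.step()                         # one fine-tuning update
```

Unlike the feature-based transfer in point 5, where fixed word embeddings are fed to a separate task model, this approach updates all of the pretrained weights, which is one way to close the gap described in point 6.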

This raises the following questions:

What do we mean by modeling deep contextualized representations in the context of language?


What is the model really learning?


How do we build and train these pretrained language models?


What are the key components of pretrained language models, and how can we improve them?


How do we use or apply them to solve different language-based problems?


What are the benefits of pretrained language models as compared to conventional transfer learning techniques for NLP?


What are the drawbacks?


What aspects of natural language do we need to keep in mind when pretraining language models?


What kinds of pretraining tasks are we considering to build and test these so-called generalizable NLP systems?


And more importantly, what kinds of datasets should we use that are representative enough to address the wide range of NLP tasks?


Source: www.cnblogs.com/muhanxiaoquan/p/11136481.html