Paper Learning - The Generalized Few-Shot Classification Problem

In general, there are two challenges in few-shot classification:

Asymmetric classification: the base classes have far more labeled samples than the novel classes, so the classifier tends to predict that a test node belongs to a base class rather than a novel class.
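This bias can be seen in a tiny sketch (my own toy example, not from the paper): a naive classifier that scores a query by its *total* similarity to each class's training samples inherits the sample-count imbalance, so the base class wins even when the query is equally close to both class means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 500 labeled base-class nodes vs. 5 novel-class nodes.
base = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
novel = rng.normal(loc=2.0, scale=1.0, size=(5, 2))

query = np.array([1.0, 1.0])  # roughly equidistant from both class means

def score(samples, q):
    # Summed (not averaged) similarity -- grows with the class size,
    # which is exactly the asymmetry the paper points out.
    return np.exp(-np.linalg.norm(samples - q, axis=1)).sum()

print(score(base, query) > score(novel, query))  # base wins by sheer count
```

Averaging instead of summing (e.g. a prototype per class) is one common way to remove this particular source of bias.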

Inconsistent preference: my personal understanding is that the weights best suited for learning base-node features differ from the weights best suited for learning novel-node features.

A vivid example of inconsistent preference:

Suppose nodes of the same class tend to be connected. For a class with many labeled samples (the many-shot case), it is best to focus on a small nearby neighborhood. For a class with few labeled samples (the few-shot case), it is better to look over a farther, larger range.
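The intuition above can be sketched as a simple shot-aware rule of thumb (my own illustration; the thresholds are made up, not from the paper): fewer labeled samples means aggregating over more hops of the graph.

```python
# Hypothetical heuristic: many-shot classes stay local, few-shot classes
# widen the neighborhood to borrow signal from a larger range.
def neighborhood_hops(num_shots: int, max_hops: int = 3) -> int:
    """Map a class's labeled-sample count to a neighborhood radius (hops)."""
    if num_shots >= 20:   # many-shot: nearby structure is reliable
        return 1
    if num_shots >= 5:    # moderate: look a bit farther
        return 2
    return max_hops       # few-shot: use the largest range

print(neighborhood_hops(50), neighborhood_hops(3))  # 1 3
```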

To address these two problems, a meta-learning method is proposed:

My rough understanding: a large number of samples from many training tasks are used to train a meta-learner, and the meta-learner then helps the classifier learn the test task well from its few labeled samples.

This meta-learning method can be used in few-shot learning.
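A minimal episodic sketch of this idea (prototype-style, my own toy illustration rather than the paper's exact method): each episode samples a small support/query split from the abundant base classes, so the learner repeatedly practices the few-shot setting it will face at test time.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_episode(class_data, n_way=2, k_shot=3, q_query=2):
    """One training episode: build class prototypes from a small support
    set, then classify held-out queries by nearest prototype."""
    classes = rng.choice(len(class_data), size=n_way, replace=False)
    prototypes, queries = [], []
    for label, c in enumerate(classes):
        samples = class_data[c]
        idx = rng.permutation(len(samples))
        support = samples[idx[:k_shot]]
        prototypes.append(support.mean(axis=0))  # class prototype
        for q in samples[idx[k_shot:k_shot + q_query]]:
            queries.append((q, label))
    prototypes = np.stack(prototypes)
    correct = sum(
        int(np.argmin(np.linalg.norm(prototypes - q, axis=1)) == label)
        for q, label in queries
    )
    return correct / len(queries)

# Toy "base classes": well-separated Gaussian blobs.
data = [rng.normal(loc=3.0 * c, scale=0.3, size=(30, 2)) for c in range(4)]
acc = np.mean([run_episode(data) for _ in range(50)])
print(acc)  # near 1.0 on well-separated blobs
```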

It is worth mentioning that this solution rests on an assumption: all test nodes are sampled independently from the novel classes, which is difficult to satisfy in real-world applications.

Therefore, the generalized few-shot classification problem is proposed: test nodes should be sampled from both the base and the novel classes.

 

It is worth mentioning that the weight selector consists of two parts:

First, a learner of the same form as the meta-learner.

Second, different weights are selected in a shot-aware way: because the test samples include not only novel nodes but also base nodes, and the most suitable weights for the two differ.
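The shot-aware part might be sketched as follows (a hypothetical illustration of the idea, not the paper's actual selector): blend two candidate weight sets, one suited to base (many-shot) nodes and one to novel (few-shot) nodes, with a mixing coefficient driven by the shot count.

```python
import numpy as np

def select_weights(w_base, w_novel, num_shots, saturation=20):
    """More shots -> lean toward the base-node weights; the `saturation`
    threshold is an assumption of this sketch."""
    alpha = min(num_shots / saturation, 1.0)
    return alpha * w_base + (1.0 - alpha) * w_novel

w_base = np.array([1.0, 0.0])
w_novel = np.array([0.0, 1.0])
print(select_weights(w_base, w_novel, 20))  # pure base weights
print(select_weights(w_base, w_novel, 5))   # 0.25/0.75 blend
```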

 


Origin blog.csdn.net/weixin_62375715/article/details/130247334