Introduction to self-paced learning

For expert research in this area, see Professor Deyu Meng of Xi'an Jiaotong University.

His recent research directions cover fundamental problems in machine learning and computer vision, in particular:

Meta-learning
Variational Bayesian methods for inverse problems
Robust and interpretable deep learning

1.1 Curriculum learning

Reference reading

http://huangc.top/2021/06/13/Curriculum-Learning-2021/

1.2 Self-paced curriculum learning

Paper: https://ojs.aaai.org/index.php/AAAI/article/view/9608

Curriculum learning (CL) and self-paced learning (SPL) are recently proposed learning regimes inspired by the learning processes of humans and animals, in which training gradually progresses from easier to harder samples. The two methods share a similar conceptual paradigm but differ in their concrete learning schemes.

In CL, the curriculum is predetermined by prior knowledge and remains fixed thereafter, so the method relies heavily on the quality of that prior knowledge and ignores feedback from the learner. In SPL, the curriculum is determined dynamically to adapt to the learner's pace; however, SPL cannot incorporate prior knowledge, which makes it prone to overfitting.

In this paper, we discover the missing link between CL and SPL and propose a unified framework called Self-Paced Curriculum Learning (SPCL). SPCL is formulated as a concise optimization problem that takes into account both the prior knowledge known before training and the learning progress during training. By analogy with human education, SPCL corresponds to an "instructor-student collaborative" learning mode, rather than the "instructor-driven" mode of CL or the "student-driven" mode of SPL. Empirically, the paper demonstrates the advantages of SPCL on two tasks.
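To make the "concise optimization problem" concrete, here is the general form of the SPCL objective from the AAAI 2015 paper (notation lightly simplified): the model parameters w and per-sample weights v are optimized jointly, a self-paced regularizer f controls the pace through the age parameter λ, and a curriculum region Ψ encodes the prior knowledge:

\min_{\mathbf{w},\,\mathbf{v}\in[0,1]^n} \; \sum_{i=1}^{n} v_i\, L\big(y_i, g(x_i;\mathbf{w})\big) + f(\mathbf{v};\lambda) \quad \text{s.t.}\ \mathbf{v}\in\Psi

Here L is the training loss of sample i. Dropping the constraint v ∈ Ψ essentially recovers SPL, while fixing v according to a predefined ordering recovers CL; this is the sense in which SPCL unifies the two.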

1.3 The relationship between the two

Self-paced learning (SPL) means learning at one's own pace. It was proposed as an improvement on curriculum learning (CL), introduced by Bengio et al. in 2009. CL is inspired by human cognition: when people learn, they usually start with the simple material and gradually move on to the more difficult parts. However, Bengio's CL relies on a fixed prior to rank samples by difficulty before they are fed to the model. The biggest difference in SPL is that sample scheduling is embedded directly into the model's objective, making it a dynamic, jointly optimizable process.

Paper: Self-Paced Learning for Latent Variable Models, NIPS 2010
Author affiliation: Department of Computer Science, Stanford University

The basic idea is to exploit the duality between loss magnitude and sample difficulty: samples with smaller training loss are treated as easier and are weighted more heavily during learning. Optimizing these weights alternately with the model parameters resembles the EM algorithm after introducing latent variables, and it makes the model more robust when learning the data distribution.
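A minimal sketch of this alternating scheme in Python (illustrative, not the paper's exact setup: scikit-learn's LogisticRegression is a stand-in learner, integer class labels 0..K-1 are assumed, and the hard 0/1 weights are the closed-form minimizer of the classic SPL objective):

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_paced_train(X, y, lam=0.5, growth=1.3, rounds=10):
    # Initialize on all samples to obtain a first estimate of per-sample losses.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        # Per-sample loss: negative log-likelihood of the true class.
        p = model.predict_proba(X)[np.arange(len(y)), y]
        loss = -np.log(np.clip(p, 1e-12, None))
        # Closed-form SPL weights: a sample is admitted ("easy") iff its
        # current loss is below the age parameter lam.
        v = loss < lam
        # Retrain only on the currently easy samples (skip degenerate subsets).
        if v.sum() >= 2 and len(np.unique(y[v])) > 1:
            model = LogisticRegression(max_iter=1000).fit(X[v], y[v])
        # Grow lam so that harder samples are admitted in later rounds.
        lam *= growth
    return model

As lam grows, the admission threshold loosens and the model sees progressively harder samples, which is precisely the "own pace" in self-paced learning.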


Related papers from Professor Meng's group:
SPCL: Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, Alexander Hauptmann. Self-Paced Curriculum Learning. AAAI, 2015.

SPMF: Qian Zhao, Deyu Meng, Lu Jiang, Qi Xie, Zongben Xu, Alexander Hauptmann. Self-Paced Learning for Matrix Factorization. AAAI, 2015.

SPLD: Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, Alexander Hauptmann. Self-Paced Learning with Diversity. NIPS, 2014.

SPaR: Lu Jiang, Deyu Meng, Teruko Mitamura, Alexander Hauptmann. Easy Samples First: Self-Paced Reranking for Zero-Example Multimedia Search. ACM MM, 2014.



Source: https://blog.csdn.net/chumingqian/article/details/132605489