Continual Test-Time Domain Adaptation (CoTTA)

Table of contents

  • Preface
  • Related work
    • Source Data Adaptation
    • Target Data Adaptation
  • Overview of CoTTA
  • CoTTA in detail
    • Weight-Averaged Pseudo-Labels
    • Augmentation-Averaged Pseudo-Labels
    • Stochastic Restoration
  • Experiments
  • Conclusion
  • References

Preface

Continual Test-Time Domain Adaptation (CoTTA) was proposed at CVPR 2022. Its goal is to adapt a source pre-trained model to the target domain without using any source data. Existing research mainly deals with a single, static target domain. In the real world, however, machine perception systems must operate in non-stationary, changing environments, where the target distribution shifts over time.

Existing methods are mainly based on self-training and entropy regularization, but they can still struggle in such non-stationary environments. As the target distribution shifts over time, pseudo-labels become unreliable, and the resulting noisy pseudo-labels lead to error accumulation and catastrophic forgetting. To address these issues, the paper proposes a continual test-time domain adaptation approach, CoTTA.
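
To make this concrete, below is a minimal, hypothetical sketch of entropy-minimization test-time adaptation in PyTorch (in the spirit of methods such as TENT). It is not CoTTA itself; the function names and the update loop are illustrative assumptions. The idea is simply to update the model on each unlabeled target batch by minimizing the entropy of its own predictions.

    import torch
    import torch.nn.functional as F

    def entropy_loss(logits):
        # Average Shannon entropy of the softmax predictions for a batch.
        log_probs = F.log_softmax(logits, dim=1)
        probs = log_probs.exp()
        return -(probs * log_probs).sum(dim=1).mean()

    def adapt_on_batch(model, x, optimizer):
        # One test-time adaptation step on an unlabeled target batch:
        # predict, minimize prediction entropy, then return the predictions.
        logits = model(x)
        loss = entropy_loss(logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return logits.detach()

    # Usage sketch (assumed setup): adapt a source pre-trained model
    # batch by batch on the streaming target data.
    # model = ...  # source pre-trained network
    # optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    # for x in target_loader:
    #     preds = adapt_on_batch(model, x, optimizer)

In practice, such methods often update only the affine parameters of the normalization layers rather than the whole network; the sketch above glosses over that detail.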

Before formally introducing CoTTA, let us first review some related work.

Related work
