2020: A Summary of Multi-source Domain Adaptation (MDA) Methods

Recommended: Wang's transfer-learning series on Zhihu

https://zhuanlan.zhihu.com/p/66130006


Multi-source Domain Adaptation in the Deep Learning Era: A Systematic Survey (2020)
https://arxiv.org/pdf/2002.12169.pdf

Most methods employ shared feature extractors to learn domain-invariant features. However, domain invariance may be detrimental to discriminative power. This leads to two design choices: the feature-extractor weights can be either shared or unshared across domains (a minimal sketch of the two options follows below). Although unshared feature extractors can better align the target and each source in the latent space, they substantially increase the number of parameters in the model.
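A minimal PyTorch sketch of the trade-off (my own illustration, not code from the survey):

```python
import torch.nn as nn

# Shared design: one extractor serves every source and the target,
# which keeps the parameter count constant in the number of domains.
shared_extractor = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

# Unshared design: one extractor per source plus one for the target.
# Per-domain alignment can be tighter, but the parameter count grows
# linearly with the number of domains.
num_sources = 3  # illustrative value
unshared_extractors = nn.ModuleList(
    nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )
    for _ in range(num_sources + 1)  # +1 for the target domain
)
```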

4 Deep Multi-source Domain Adaptation

Existing methods on deep MDA primarily focus on the unsupervised, homogeneous, closed set, strongly supervised, one target, and target data available settings. That is, there is one target domain, the target data is unlabeled but available during the training process, the source data is fully labeled, the source and target data are observed in the same data space, and the label sets of all sources and the target are the same. In this paper, we focus on MDA methods under these settings.


5 Conclusion and Future Directions

In this paper, we provided a survey of recent MDA developments in the deep learning era. We motivated MDA, defined different MDA strategies, and summarized the datasets that are commonly used for performing MDA evaluation. Our survey focused on a typical MDA setting, i.e. unsupervised, homogeneous, closed set, and one target MDA. We classified these methods into different categories, and compared the representative ones technically and experimentally. We conclude with several open research directions:

Specific MDA strategy implementation. As introduced in Section 2, there are many types of MDA strategies, and implementing an MDA strategy according to the specific problem requirement would likely yield better results than a one-size-fits-all MDA approach. Further investigation is needed to determine which MDA strategies work the best for which types of problems. Also, real-world applications may have a small amount of labeled target data; determining how to include this data and what fraction of this data is needed for a certain performance remains an open question.

Multi-modal MDA. The labeled source data may be of different modalities, such as LiDAR, radar, and image. Further research is needed to find techniques for fusing different data modalities in MDA. A further extension of this idea is to have varied modalities in different sources as well as partially labeled, multi-modal sources.

Incremental and online MDA. Designing incremental and online MDA algorithms remains largely unexplored and may provide great benefit for real-world scenarios, such as updating deployed MDA models when new source or target data becomes available.


The main techniques are:
Feature alignment losses: GAN loss, L1 loss, L2 loss, MMD (maximum mean discrepancy), and Wasserstein loss (a minimal MMD sketch follows below)
Commonly used classification losses (typically cross-entropy)
Source and target feature extractors: shared vs. unshared
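A minimal PyTorch sketch of the MMD term with an RBF kernel (my own illustration; `sigma` is a bandwidth hyperparameter):

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD between feature batches
    x (n, d) and y (m, d), using an RBF kernel of bandwidth sigma."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Usage: add mmd_rbf(source_feats, target_feats) to the classification
# loss to pull the two feature distributions together.
```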

【Paper notes, CVPR 2019】"Domain-Symmetric Networks for Adversarial Domain Adaptation" https://arxiv.org/pdf/1904.04663.pdf

The proposed SymNet includes a task classifier for the target domain, plus an extra classifier that shares neurons with the source-domain and target-domain classifiers. A new adversarial learning scheme trains the SymNet with both category-level and domain-level confusion, which strengthens the learning of domain-invariant features at the category level. In addition, a cross-domain training scheme makes the learning of the target-domain classifier more symmetric with that of the source-domain classifier.
Original post: https://blog.csdn.net/puchapu/article/details/90349610
The contributions of the paper are:
1. A new SymNet design in which an extra classifier with shared neurons jointly learns domain discrimination and domain confusion.
2. A new domain-adversarial training scheme based on a two-level domain-confusion loss; confusing same-category samples across domains improves on direct domain-level confusion (a simplified sketch follows below).
3. State-of-the-art results on standard benchmarks, with ablation studies showing the necessity of the proposed modules.
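Below is a minimal, hypothetical sketch of my reading of the core idea (not the authors' code): source and target K-way classifiers whose logits are concatenated into a single 2K-way softmax, with a domain-level confusion term that pushes the probability mass on each half toward 0.5.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SymmetricClassifier(nn.Module):
    """Source and target K-way classifiers whose logits are concatenated
    into a single 2K-way softmax over one shared feature."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.cls_src = nn.Linear(feat_dim, num_classes)
        self.cls_tgt = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):
        logits = torch.cat([self.cls_src(feats), self.cls_tgt(feats)], dim=1)
        return F.softmax(logits, dim=1)

def domain_confusion_loss(probs, k):
    # Push the total mass on the source half and the target half of the
    # 2K-way output toward 0.5 each, confusing the implicit domain classifier.
    p_src = probs[:, :k].sum(dim=1).clamp_min(1e-6)
    p_tgt = probs[:, k:].sum(dim=1).clamp_min(1e-6)
    return -0.5 * (torch.log(p_src) + torch.log(p_tgt)).mean()
```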

Multi-source Distilling Domain Adaptation (2020)

https://arxiv.org/pdf/1911.11554.pdf
https://github.com/daoyuan98/MDDA

This paper proposes a new Multi-source Distilling Domain Adaptation (MDDA) network, which not only considers the different distances between each source and the target, but also investigates the varying similarity of the source samples to the target samples. Concretely, the algorithm has four stages: (1) pre-train a source classifier on the training data of each source separately; (2) map the target into the feature space of each source by minimizing the empirical Wasserstein distance between source and target; (3) select the source training samples that are closer to the target to fine-tune each source classifier; and (4) classify each encoded target feature with the corresponding source classifier, and aggregate the different predictions using domain weights derived from the discrepancy between each source and the target (a sketch of this aggregation follows below). Extensive experiments on public DA benchmarks show that the proposed MDDA significantly outperforms the state-of-the-art methods.
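The stage-(4) aggregation can be sketched as follows; the softmax-over-negative-distances weighting is one plausible choice, and the names are hypothetical rather than taken from the official repository:

```python
import torch
import torch.nn.functional as F

def aggregate_predictions(per_source_logits, discrepancies):
    """Combine per-source classifier outputs into one target prediction.

    per_source_logits: list of (batch, num_classes) tensors, one per source.
    discrepancies: list of floats, the estimated distance of each source
    to the target (smaller distance should mean larger weight).
    """
    weights = F.softmax(-torch.as_tensor(discrepancies), dim=0)
    probs = torch.stack([F.softmax(l, dim=1) for l in per_source_logits])
    return (weights.view(-1, 1, 1) * probs).sum(dim=0)
```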

An introduction to the domain-adaptation losses used in transfer learning, and their connection to GAN losses:

1. MMD loss for transfer learning, Maximum Mean Discrepancy (https://blog.csdn.net/zkq_1986/article/details/86747841)
2. Wasserstein distance for transfer learning (see the notes on "Wasserstein Distance Guided Representation Learning for Domain Adaptation")
https://github.com/RockySJ/WDGRL
3. From GAN loss to Wasserstein GAN loss (WGAN): the improvements and their derivation https://zhuanlan.zhihu.com/p/25071913
4. MMD-GAN, and an optimization of MMD-GAN:
"Improving MMD-GAN training with repulsive loss function" (https://arxiv.org/pdf/1611.04488.pdf), an ICLR 2019 paper that improves MMD-GAN
https://github.com/richardwth/MMD-GAN
A minimal sketch of the Wasserstein domain-critic loss from item 2 follows below.
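A rough WDGRL-style sketch (my own assumptions, not the repo's code; the 256-dimensional feature size is illustrative):

```python
import torch.nn as nn

# A tiny critic over 256-dimensional features (dimension is assumed).
critic = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

def wasserstein_estimate(src_feats, tgt_feats):
    # The critic's score gap approximates the Wasserstein-1 distance
    # between the two feature distributions when the critic is 1-Lipschitz.
    return critic(src_feats).mean() - critic(tgt_feats).mean()

# Training alternates: maximize the estimate w.r.t. the critic (with a
# Lipschitz constraint such as a gradient penalty), then minimize it
# w.r.t. the feature extractor so the domains become indistinguishable.
```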

###################################################################

1. AdaGraph (2019): AdaGraph: Unifying Predictive and Continuous Domain Adaptation through Graphs

This paper focuses on the predictive domain adaptation (PDA) scenario: no target data is available, and the system must learn to generalize from annotated source images and unlabeled samples from auxiliary domains that carry associated metadata.
https://arxiv.org/abs/1903.07062
https://github.com/mancinimassimiliano/adagraph

Chinese notes on the paper: https://blog.csdn.net/fuyouzhiyi/article/details/94441304


2. Maximum Classifier Discrepancy domain adaptation (MCD_DA), 2018

The method is worth reading, and code is available:
Maximum Classifier Discrepancy for Unsupervised Domain Adaptation
(https://zhuanlan.zhihu.com/p/52085426, a clear and simple walkthrough)
https://github.com/mil-tokyo/MCD_DA
Finally, the VisDA classification dataset, currently the largest cross-domain object-classification dataset, is used to compare against MMD and DANN; the classification results are better than those of the other methods for most object categories.
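At the core of MCD are two classifiers on top of one feature extractor: the classifiers are trained to maximize the discrepancy of their predictions on target samples, and the feature extractor is then trained to minimize it. A minimal sketch of the discrepancy term, which the paper measures as the L1 distance between the two softmax outputs:

```python
import torch.nn.functional as F

def classifier_discrepancy(logits1, logits2):
    """L1 distance between the softmax outputs of the two classifiers,
    averaged over a batch of target samples."""
    p1, p2 = F.softmax(logits1, dim=1), F.softmax(logits2, dim=1)
    return (p1 - p2).abs().mean()

# Step B of MCD maximizes this w.r.t. the two classifiers (to expose
# target samples outside the source support); step C minimizes it
# w.r.t. the feature extractor.
```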

3. Conditional Adversarial Domain Adaptation, a domain-adaptation method based on conditional adversarial networks.

Long M, Cao Z, Wang J, et al. Conditional Adversarial Domain Adaptation. NIPS 2018.
As the title suggests, the method consists of three parts: Condition + Adversarial + Adaptation. The conditioning uses an operation called a multilinear map (sketched below).
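A minimal sketch of the multilinear conditioning, which feeds the domain discriminator the outer product of the feature vector f and the classifier prediction g:

```python
import torch

def multilinear_map(f, g):
    """Outer product of a feature batch f (batch, d) and softmax
    predictions g (batch, num_classes); the flattened result is what
    the conditional domain discriminator receives as input."""
    return torch.bmm(g.unsqueeze(2), f.unsqueeze(1)).flatten(start_dim=1)
```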

4. A strong method (2019): transfer between heterogeneous networks

Learning What and Where to Transfer, ICML 2019 (a sentence-by-sentence Chinese walkthrough):
https://blog.csdn.net/qq_38221026/article/details/103260017

Learning What and Where to Transfer draws on the following paper:
"Beyond Sharing Weights for Deep Domain Adaptation" (PAMI), which transfers between two networks of identical architecture; this paper handles heterogeneous networks, so it is more general.
What it is useful for: besides transfer, the method can also be viewed as a form of network compression, transferring from a large network to a small one to improve the small network's performance (a simplified sketch follows below).
https://github.com/alinlab/L2T-ww
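A simplified sketch of the transfer objective under my own naming (the actual method learns meta-networks that produce the weights; here they are plain parameters):

```python
import torch.nn.functional as F

def weighted_matching_loss(student_feats, teacher_feats, pair_weights):
    """Weighted feature matching between a (small) student network and a
    (large, possibly heterogeneous) teacher network.

    student_feats, teacher_feats: lists of feature tensors already
    projected to matching shapes; pair_weights: scalars that decide
    *where* to transfer (learned by meta-networks in the actual paper).
    """
    losses = [F.mse_loss(s, t) for s, t in zip(student_feats, teacher_feats)]
    return sum(w * l for w, l in zip(pair_weights, losses))
```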


5. Uses a domain critic to minimize the Wasserstein distance (with gradient penalty) between domains (2017); this improves on earlier methods.

https://github.com/jvanvugt/pytorch-domain-adaptation
ADDA was used previously. A sketch of the gradient-penalty term follows below.
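For reference, a standard WGAN-style gradient penalty on interpolated features (a generic sketch, not code from this repository):

```python
import torch

def gradient_penalty(critic, src_feats, tgt_feats):
    """Penalize the critic's gradient norm on random interpolations of
    source and target features, pushing it toward 1-Lipschitz."""
    alpha = torch.rand(src_feats.size(0), 1, device=src_feats.device)
    inter = (alpha * src_feats + (1 - alpha) * tgt_feats).requires_grad_(True)
    grads = torch.autograd.grad(critic(inter).sum(), inter,
                                create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```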

Source: https://blog.csdn.net/m0_37192554/article/details/104610960