SepVAE: A Contrastive VAE for Separating Pathological Patterns from Healthy Patterns


Problem addressed: The paper proposes a new Contrastive Analysis VAE (CA-VAE) method to separate pathological patterns, which appear only in the patient (target) dataset, from patterns shared with the healthy (background) dataset. Existing models fail to effectively prevent information sharing between the latent spaces and to capture all salient factors of variation.

Key idea: The method splits the latent space into a set of salient features (i.e., specific to the target dataset) and a set of common features (i.e., present in both datasets). To enforce this split, the paper introduces two key regularization losses: a disentanglement term between the common and salient representations, and a classification term between background and target samples in the salient space. Compared with prior work, this yields a cleaner separation of healthy patterns from pathological ones.
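
To make the split concrete, below is a minimal PyTorch sketch of a generic CA-VAE with a common space z_c and a salient space z_s, using simple stand-ins for the two regularizers (a cross-covariance penalty for disentanglement and a linear classifier on z_s). All names, dimensions, and loss forms here are illustrative assumptions, not the official SepVAE implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def kl_std_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch.
    return -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))


class CAVAESketch(nn.Module):
    """Toy contrastive-analysis VAE with a common space z_c and a salient space z_s."""

    def __init__(self, x_dim=784, h_dim=256, zc_dim=16, zs_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        # Separate posterior heads for the common and salient latents.
        self.zc_mu, self.zc_logvar = nn.Linear(h_dim, zc_dim), nn.Linear(h_dim, zc_dim)
        self.zs_mu, self.zs_logvar = nn.Linear(h_dim, zs_dim), nn.Linear(h_dim, zs_dim)
        self.dec = nn.Sequential(nn.Linear(zc_dim + zs_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        # Linear classifier acting only on the salient space (background vs target).
        self.clf = nn.Linear(zs_dim, 1)

    @staticmethod
    def reparam(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def loss(self, x, is_target, alpha=1.0, beta=1.0):
        """x: (B, x_dim) inputs; is_target: (B,) float, 1 = patient, 0 = healthy."""
        h = self.enc(x)
        zc_mu, zc_lv = self.zc_mu(h), self.zc_logvar(h)
        zs_mu, zs_lv = self.zs_mu(h), self.zs_logvar(h)
        zc, zs = self.reparam(zc_mu, zc_lv), self.reparam(zs_mu, zs_lv)
        # Background samples should carry no salient information, so their z_s
        # is masked to zero before decoding (a common CA-VAE convention).
        zs_masked = zs * is_target.view(-1, 1)
        x_rec = self.dec(torch.cat([zc, zs_masked], dim=1))

        rec = F.mse_loss(x_rec, x)
        kl = kl_std_normal(zc_mu, zc_lv) + kl_std_normal(zs_mu, zs_lv)
        # (1) Disentanglement between common and salient representations,
        #     here a simple cross-covariance penalty as a stand-in.
        zc_c, zs_c = zc - zc.mean(0), zs - zs.mean(0)
        indep = (zc_c.t() @ zs_c / x.size(0)).pow(2).mean()
        # (2) Classification of background vs target samples in the salient space.
        clf = F.binary_cross_entropy_with_logits(self.clf(zs).squeeze(1), is_target)
        return rec + kl + alpha * indep + beta * clf


# Usage sketch:
# model = CAVAESketch()
# x = torch.randn(32, 784); y = torch.randint(0, 2, (32,)).float()
# loss = model.loss(x, y); loss.backward()
```

Masking z_s to zero for background samples before decoding is a common CA-VAE device that forces target-only information into the salient space; whether SepVAE uses exactly this mechanism and how it formulates the disentanglement term should be checked against the paper and repository.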

Other highlights: Demonstrates better performance than previous CA-VAE methods on three medical applications and a natural image dataset (CelebA). Code and datasets are available on GitHub.

Related research: Other recent related works include: 1) "Variational Autoencoder for Semi-Supervised Anomaly Detection" by Tianyu Pang, Jiaming Mu, and Shuai Li (South China University of Technology); 2) "Contrastive Learning for Medical Visual Question Answering" by Yunqiu Xu, Zhengtao Jiang, and Weimin Zhou (Nanjing University).

Abstract: SepVAE: a contrastive VAE to separate pathological patterns from healthy ones. Robin Louiset, Edouard Duchesnay, Antoine Grigis, Benoit Dufumier, Pietro Gori. Contrastive Analysis VAEs (CA-VAEs) are a family of variational autoencoders (VAEs) that aim to separate the factors of variation common to a background dataset (e.g., healthy subjects) and a target dataset (e.g., patients) from the factors that exist only in the target dataset. To this end, these methods split the latent space into a set of salient features (i.e., specific to the target dataset) and a set of common features (i.e., present in both datasets). Currently, all models fail to effectively prevent information sharing between the latent spaces and to capture all salient factors of variation. To address this, we introduce two key regularization losses: a disentanglement term between the common and salient representations, and a classification term between background and target samples in the salient space. We show better performance than previous CA-VAE methods on three medical applications and a natural image dataset (CelebA). Code and datasets are available on GitHub at https://github.com/neurospin-projects/2023_rlouiset_sepvae.
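
Schematically, and assuming a standard CA-VAE formulation (the precise terms and weights used by SepVAE are given in the paper), the training objective adds the two regularizers to the usual reconstruction and KL terms:

$$\mathcal{L} \;=\; \mathcal{L}_{\mathrm{rec}}(x, \hat{x}) \;+\; \mathrm{KL}\!\left(q(z_c \mid x)\,\|\,p(z_c)\right) \;+\; \mathrm{KL}\!\left(q(z_s \mid x)\,\|\,p(z_s)\right) \;+\; \alpha\,\mathcal{L}_{\mathrm{indep}}(z_c, z_s) \;+\; \beta\,\mathcal{L}_{\mathrm{clf}}(z_s, y),$$

where y labels a sample as background (healthy) or target (patient), and α, β are illustrative weighting hyperparameters.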
