[Paper Translation] Intramodality Domain Adaptation Using Self Ensembling and Adversarial Training


Intramodality Domain Adaptation Using Self Ensembling and Adversarial Training


Abstract. Advances in deep learning techniques have led to compelling achievements in medical image analysis. However, the performance of neural network models degrades drastically if the test data come from a domain different from the training data. In this paper, we present and evaluate a novel unsupervised domain adaptation (DA) framework for semantic segmentation which uses self ensembling and adversarial training methods to effectively tackle domain shift between MR images. We evaluate our method on two publicly available MRI datasets to address two different types of domain shift: on the BraTS dataset [11] to mitigate domain shift between high grade and low grade gliomas, and on the SCGM dataset [13] to tackle cross-institutional domain shift. Through extensive evaluation, we show that our method achieves favorable results on both datasets.


1 Introduction

The existence of domain shift between related datasets poses a serious challenge for CNN-based tasks like segmentation, which require a large amount of annotated data for training. Unlike with natural images, the problem of domain shift is ubiquitous in biomedical image analysis, as images acquired by various institutions belong to different domains due to differences in the image acquisition parameters used for capturing data. In addition, tumors and cancers of different grades and severity may belong to different distributions, limiting the ability of a single segmentation model to label cancerous tumors of varying severity and growth (Figure 1). To tackle this issue, unsupervised domain adaptation has been extensively studied to enable CNNs to achieve competitive performance in a domain different from the training domain [19].


In this paper, we study intramodality domain adaptation, where both the source and target domains belong to the same modality but have different distributions due to differences in image acquisition parameters or tumor severity. Intramodality domain shift is often neglected in biomedical image analysis, as most deep learning based networks are trained and tested on a mixture of data collected from different institutions and devices, disregarding the associated domain shift. This often results in unpredictable performance if the test set comes from a data source different from the training set.


Numerous unsupervised domain adaptation methods have been proposed in the literature, with a growing emphasis on learning domain invariant representations to implicitly learn the feature mapping between domains [19]. These methods can be broadly classified into divergence minimising methods [10, 3, 17], which minimise distribution statistics between domains, and adversarial methods [20, 5, 16], which use discriminators for aligning feature spaces. In contrast, French et al. [4] employed self-ensembling for domain adaptation and achieved state-of-the-art results on the VisDA-2017 domain adaptation challenge. This technique is based on the Mean-Teacher network [18] introduced for semi-supervised learning and requires extensive task-specific data augmentation. Additionally, pixel-space translation [2] and modulating batchnorm statistics [9] have also been explored in detail for domain adaptation and achieved promising results [19].


Figure 1: Tumor size variability in the BraTS dataset. Top row: axial slices of a high grade glioma (HGG); bottom row: axial slices of a low grade glioma (LGG). In the ground truth (GT), the union of all colors = whole tumor, green = enhancing tumor, blue = core tumor. HGG and LGG differ in the size and distribution of tumor regions.

In biomedical imaging, Kamnitsas et al.'s [7] work on brain lesion MRI domain adaptation using adversarial training demonstrated the effectiveness of adversarial loss for unsupervised domain adaptation on medical datasets. The latest study on medical data that is closely related to our work is [12], which performed unsupervised domain adaptation using self-ensembling for spinal cord grey matter segmentation and achieved promising results.


Current research trends in domain adaptation are directed towards combining multiple techniques to achieve superior performance in various computer vision tasks [6, 15]. Following this direction, we propose a combined network which uses domain invariant feature training together with a self ensembling technique for MRI domain adaptation in the context of semantic segmentation. We demonstrate the performance of our method on two publicly available MRI datasets: 1) on the BraTS [11, 1] dataset for multiclass tumor segmentation using high grade to low grade glioma domain adaptation, and 2) on the SCGM [13] segmentation dataset for grey matter segmentation using cross-institutional DA. To the best of our knowledge, our work here is the first to perform high grade to low grade glioma domain adaptation and the first to use a combination of self-ensembling and adversarial training for medical image domain adaptation.


2 Methodology

2.1 Overview of the Proposed Model

Our domain adaptation network consists of three modules, as shown in Figure 2: a student segmentation network G, a teacher segmentation network (maintained as an EMA copy of G), and a discriminator D. First, we forward source images with labels through the segmentation network G and update its weights. Then we pass unlabeled target images through G and obtain its pre-softmax predictions. Predictions from both domains are passed through the discriminator D, which distinguishes whether the input belongs to the source or the target domain. The adversarial loss from D is then back-propagated through G to update the network weights and learn a domain invariant feature representation. The teacher network weights are then updated as the exponential moving average (EMA) of the student network (G) weights. Finally, we compute a consistency loss between the student and teacher network predictions for target images and back-propagate it through the student network (G). Figure 2 illustrates the proposed algorithm.


Figure 2: Our proposed architecture. Green arrows denote source data and red arrows denote target data. The teacher network weights are updated via EMA.

2.2 Adversarial Training

The objective behind adversarial training is to make the segmentation network invariant to variations between the source and target domains. This is achieved by using a fully convolutional discriminator network (D) to distinguish the domain of the input data. D is trained with a cross entropy loss on source and target domain predictions. For target image predictions, we compute an adversarial loss (Ladv) and back-propagate it to the segmentation network (G) to fool the discriminator by pushing the feature representation towards a domain invariant space.

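A rough sketch of this step is given below, assuming a PyTorch-style implementation; the names `seg_student`, `disc`, and `disc_opt`, and the binary cross entropy formulation, are illustrative rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def adversarial_step(seg_student, disc, disc_opt, src_images, tgt_images, lambda_adv=0.001):
    """Sketch of one adversarial alignment step on pre-softmax segmentation outputs."""
    src_logits = seg_student(src_images)   # source-domain pre-softmax predictions
    tgt_logits = seg_student(tgt_images)   # target-domain pre-softmax predictions

    # Discriminator update: learn to tell source (label 1) from target (label 0).
    d_src = disc(src_logits.detach())
    d_tgt = disc(tgt_logits.detach())
    loss_disc = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src))
                 + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    disc_opt.zero_grad()
    loss_disc.backward()
    disc_opt.step()

    # Adversarial loss for the segmenter: make target features look source-like.
    d_tgt_for_seg = disc(tgt_logits)
    loss_adv = F.binary_cross_entropy_with_logits(d_tgt_for_seg, torch.ones_like(d_tgt_for_seg))
    return lambda_adv * loss_adv   # added to the segmentation objective before back-propagation
```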

2.3 Self Ensembling and Mean Teacher

We combine adversarial training with self ensembling using the Mean-Teacher approach in our network. Although the initial self ensembling papers [8, 18] were specifically designed for semi-supervised learning, French et al. extended the mean-teacher algorithm to UDA in their seminal paper [4]. Their proposed architecture consists of a student network and a teacher network, where the student network is trained with back-propagation while the teacher network weights are an exponential moving average of the student network weights. We use self ensembling as a regularizer to smooth the weights of our feature-space domain adaptation network. Student network weights are updated by the task loss and the adversarial loss, and are then exponentially averaged over time to update the teacher network weights. We finally use the teacher network for making predictions. For our mean teacher self ensembling model, we use the same architecture proposed by [4].



Perone et al. [12] adapted and implemented this network for domain adaptation in medical image segmentation and achieved favorable results. A key difference between their work and ours is that their model uses only self ensembling for domain adaptation, while we combine it with adversarial training as a regularizer for feature-space domain adaptation.

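A minimal sketch of the mean-teacher bookkeeping is shown below (PyTorch-style); the EMA decay `alpha=0.99` is an illustrative value, not one reported by the authors.

```python
import copy
import torch

def make_teacher(student):
    """The teacher starts as a frozen copy of the student network."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)   # the teacher is never updated by back-propagation
    return teacher

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """teacher <- alpha * teacher + (1 - alpha) * student, parameter-wise."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)
```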

2.4 Objective Function

With the proposed network, we formulate the final loss function for domain adaptation as follows:

L(Is, It) = Ltask(Is) + λadv · Ladv(It) + λcons · Lcons(It)

where Is and It are inputs from the source and target domains, respectively. Ltask(Is) is the segmentation task loss computed on the paired input data. We use the dice loss for segmentation, which is commonly employed in biomedical image segmentation due to its low sensitivity to class imbalance. The adversarial loss Ladv(It) is computed as a cross entropy loss on target images to adversarially align the feature representations of both domains. The consistency loss Lcons(It) measures the difference between the teacher and student network predictions on target images, distilling the teacher's knowledge into the student model for self ensembling. We use the mean squared error (MSE) for Lcons(It), as suggested by [12]. Additionally, the discriminator network is trained on source and target feature representations using a standard cross-entropy discriminator loss (Ldisc(Is, It)).

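Putting the terms together, a single training iteration might look like the sketch below (PyTorch-style). `adversarial_step` and `ema_update` are the illustrative helpers sketched in Sections 2.2 and 2.3, `dice_loss` is an assumed helper (one minus the dice coefficient sketched in Section 4), and the default weights reflect the values reported in Section 4.1.

```python
import torch
import torch.nn.functional as F

def train_step(student, teacher, disc, seg_opt, disc_opt,
               src_images, src_labels, tgt_images,
               lambda_adv=0.001, lambda_cons=2.0):
    # Supervised dice loss on labelled source data.
    src_probs = torch.softmax(student(src_images), dim=1)
    loss_task = dice_loss(src_probs, src_labels)

    # Adversarial alignment of target features (also updates the discriminator).
    loss_adv_weighted = adversarial_step(student, disc, disc_opt,
                                         src_images, tgt_images, lambda_adv)

    # Consistency (MSE) between student and teacher predictions on target data.
    tgt_student = torch.softmax(student(tgt_images), dim=1)
    with torch.no_grad():
        tgt_teacher = torch.softmax(teacher(tgt_images), dim=1)
    loss_cons = F.mse_loss(tgt_student, tgt_teacher)

    # L(Is, It) = Ltask(Is) + λadv·Ladv(It) + λcons·Lcons(It)
    loss = loss_task + loss_adv_weighted + lambda_cons * loss_cons
    seg_opt.zero_grad()
    loss.backward()
    seg_opt.step()

    # Teacher weights follow the student via EMA.
    ema_update(teacher, student)
    return loss.item()
```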

2.5 Model Architecture

Discriminator Network: For the discriminator, we use a fully convolutional neural network consisting of four convolutional layers with 4 × 4 kernels and a stride of 2. Except for the last layer, each convolutional layer is followed by a leaky ReLU parameterized by 0.2. The discriminator is trained with the Adam optimizer using its default parameters and a polynomial decay function for the learning rate.

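A sketch of a discriminator matching this description is given below (PyTorch-style); the channel widths, padding, and the exponent of the polynomial schedule are assumptions, since the paper does not list them.

```python
import torch
import torch.nn as nn

def build_discriminator(in_channels, base=64):
    """Fully convolutional domain discriminator: four 4x4 convolutions with stride 2,
    each but the last followed by a leaky ReLU with negative slope 0.2."""
    return nn.Sequential(
        nn.Conv2d(in_channels, base, kernel_size=4, stride=2, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(base, base * 2, kernel_size=4, stride=2, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(base * 2, base * 4, kernel_size=4, stride=2, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(base * 4, 1, kernel_size=4, stride=2, padding=1),  # per-patch domain logits
    )

disc = build_discriminator(in_channels=4)       # input: pre-softmax class maps (4 classes for BraTS)
disc_opt = torch.optim.Adam(disc.parameters())  # Adam with default parameters
max_iters = 50_000                              # illustrative schedule length
disc_sched = torch.optim.lr_scheduler.LambdaLR(
    disc_opt, lambda it: (1.0 - min(it, max_iters) / max_iters) ** 0.9)  # polynomial decay
```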

Segmentation Network: We use UNet [14] as our segmentation network, with 15 layers, batch normalization and dropout. The network is trained using Adam as the optimizer with β1 = 0.9 and β2 = 0.99. Both the student and teacher networks have an identical UNet architecture, and only the student network weights are updated by back-propagation. Performance of the model is validated using the teacher network on validation data from both domains.

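A brief setup sketch under these settings is shown below (PyTorch-style); `UNet2D` stands in for the 15-layer UNet with batch normalization and dropout, which is not reproduced here, and `make_teacher` is the helper sketched in Section 2.3.

```python
import torch

student = UNet2D(in_channels=4, num_classes=4)  # e.g. BraTS: 4 MRI modalities, 4 output classes
teacher = make_teacher(student)                 # identical architecture, frozen copy of the student
seg_opt = torch.optim.Adam(student.parameters(), betas=(0.9, 0.99))

teacher.eval()  # validation on both domains uses the teacher network's predictions
```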

3 Datasets

We used two publicly available MRI datasets to evaluate our methodology. We performed HGG to LGG domain adaptation on the BraTS dataset [11, 1] and cross-institutional domain adaptation on the SCGM segmentation challenge dataset [13].


The BraTS 2018 [11, 1] dataset consists of 285 MRI samples (210 HGG and 75 LGG), each with T1, T1 contrast-enhanced, T2-weighted and FLAIR volumes and ground truth voxel-wise labels for enhancing tumor, peritumoral edema, and necrotic and non-enhancing tumor core. Both the HGG and LGG volumes are split into train and test sets, and we use the HGG training set as source and the LGG training set as target for the domain adaptation experiments. Since we use a 2D UNet for segmentation, we slice the 3D volumes into 2D axial slices of 128 × 128 and concatenate all four MRI modalities to obtain a 4-channel input. More information about the dataset can be found at [11].

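An illustrative preprocessing sketch is given below, assuming the four modality volumes and the label volume are already loaded as NumPy arrays of shape (H, W, D); the centre crop to 128 × 128 is an assumption, as the paper does not state how the slices were brought to that size.

```python
import numpy as np

def brats_axial_slices(t1, t1ce, t2, flair, labels, size=128):
    """Stack the four modalities and cut 4-channel 2D axial slices of size x size."""
    volume = np.stack([t1, t1ce, t2, flair], axis=0)        # (4, H, W, D)
    h0 = (volume.shape[1] - size) // 2                       # centre-crop offsets
    w0 = (volume.shape[2] - size) // 2
    images, masks = [], []
    for z in range(volume.shape[-1]):                        # iterate over axial slices
        images.append(volume[:, h0:h0 + size, w0:w0 + size, z])
        masks.append(labels[h0:h0 + size, w0:w0 + size, z])
    return np.asarray(images), np.asarray(masks)             # (D, 4, size, size), (D, size, size)
```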

The Spinal Cord Gray Matter Challenge (SCGM) [13] dataset contains single-channel spinal cord MRI data with grey matter labels from 4 different centers. Data is collected from four centers (UCL, Montreal, Zurich, Vanderbilt) using three different MRI systems (Philips Achieva, Siemens Trio, Siemens Skyra) with institution-specific acquisition parameters. From each center, 10 MRI volumes are publicly available, from which we center-crop 2D axial slices of 200 × 200 for our experiments. We use our network to perform cross-institutional domain adaptation on this dataset with centers 3 and 1 as source and center 2 as target, and validate the performance on all four centers.

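For the cross-institutional experiment, the data handling reduces to a centre crop plus a per-centre split, roughly as sketched below; the variable names are illustrative.

```python
import numpy as np

def center_crop(slice_2d, size=200):
    """Centre-crop a 2D axial slice (or its label map) to size x size."""
    h, w = slice_2d.shape[-2:]
    h0, w0 = (h - size) // 2, (w - size) // 2
    return slice_2d[..., h0:h0 + size, w0:w0 + size]

# Cross-institutional split used in our experiments.
source_centers = [3, 1]            # labelled source domain
target_center = 2                  # unlabelled target domain
validation_centers = [1, 2, 3, 4]  # performance is reported on all four centers
```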

4 Experiments and Results

In this section, we present experimental results to validate the proposed domain adaptation method for semantic segmentation on both datasets. First, we evaluate model performance on the SCGM dataset for cross-institutional domain adaptation. Second, we carry out experiments for HGG to LGG domain adaptation on the BraTS dataset. We also conduct extensive experiments and ablation studies on both datasets to substantiate the efficacy of our proposed architecture. For a fair comparison and analysis, all experiments are run for the same number of epochs with the same set of parameters for the optimizers and learning rate decay. Model performance is evaluated using the dice coefficient (a computation sketch follows the experiment list below). For each dataset we conduct the following experiments:


1. Training the segmenter network (with no DA) on combined source and target data and testing it separately on the held-out sets (super-all).
2. Training the segmenter network (with no DA) on source data alone and testing it separately on source and target (super-source).
3. Domain adaptation using only adversarial training (da-adv).
4. Domain adaptation using only self ensembling (da-ensemble).
5. The proposed domain adaptation algorithm with both adversarial training and self-ensembling (da-combined).

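For reference, the dice coefficient used to score all of these settings can be computed per class roughly as follows (a sketch; the smoothing term `eps` is an assumption to guard against empty masks).

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice = 2 * |P ∩ G| / (|P| + |G|) for binary masks of a single class."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```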

4.1 Spinal Cord Cross Institutional Domain Adaptation

All networks for cross-institutional DA are trained for 350 epochs with centers 3 and 1 as source and center 2 as target. The weights for the adversarial and consistency losses (λadv, λcons) are optimized separately using the da-adv and da-ensemble models. We found λadv = 0.001 and λcons = 2 to give the best performance on the individual domain adaptation models and used them for the combined DA model as well.


We present experimental results for cross-institutional domain adaptation in Table 1. The combined supervised model achieved similar dice scores on all held-out sets, while the source-only supervised model produced poor results for center 2. This substantiates the existence of intramodality domain shift among multi-institutional MRI data and validates the importance of medical image domain adaptation. In contrast, all domain adaptation networks achieved improved results on center 2, showing the effectiveness of DA techniques in mitigating domain shift. Our proposed model achieved the highest dice score on 3 out of 4 centers and produced results on par with supervised training using combined data. Figure 3 presents some example results for adapted segmentation using the combined model. Although the domain adaptation models are adversarially trained against center 2, model performance improved for all centers. This suggests that DA with the proposed architecture can be used for domain generalisation as well.


4.2 Brain Tumor Segmentation using Domain Adaptation

We trained all experiments for 150 epochs with HGG as source and LGG as target. The networks are trained with 4-channel sliced 2D axial MRI images to perform 4-class segmentation (background, enhanced tumor, whole tumor and core tumor). Performance scores for all experiments, with class-wise dice scores, are presented in Table 2. The supervised model results clearly show the domain shift between high grade and low grade gliomas in the BraTS dataset. The LGG held-out set produced inferior results when the network was trained only on HGG volumes. Our proposed domain adaptation method mitigated this domain shift to an extent and achieved a noticeable improvement in segmenting the whole and core tumor regions of the LGG dataset.


5 Conclusion

In this paper, we presented a novel approach to intramodality domain adaptation using adversarial training and self ensembling. We evaluated our model on two publicly available MRI datasets to address cross-institutional domain shift and tumor severity domain shift. The results showed improved segmentation performance on both datasets. Superior performance on two different datasets validates the generalisability of our proposed model, which can be extended to other intramodality DA applications for biomedical image segmentation. Future work includes extensive hyperparameter tuning to further improve segmentation under unsupervised domain adaptation.

