[Paper Express] WACV 2023 - CellTranspose: Few-Shot Domain Adaptation for Cell Instance Segmentation

Original title: CellTranspose: Few-shot Domain Adaptation for Cellular Instance Segmentation

Paper link: https://openaccess.thecvf.com/content/WACV2023/papers/Keaton_CellTranspose_Few-Shot_Domain_Adaptation_for_Cellular_Instance_Segmentation_WACV_2023_paper.pdf

Blogger keywords: few-shot learning, instance segmentation, domain adaptation

Recommended related papers:

- None

Summary:

Automatic cell instance segmentation is a process that has accelerated biological research over the past two decades, and recent advances have yielded higher-quality results with less effort from biologists. Most current efforts aim to remove the researcher from the loop entirely by producing highly generalizable models. However, these models invariably fall short when faced with new data whose distribution differs from the data used for training. In this work, rather than assuming the availability of large amounts of annotated target data and the computational power for retraining, we address the harder challenge of designing a method that requires a minimum of new annotated data and training time. To this end, we design a specialized contrastive loss that makes efficient use of the few annotated samples available. Extensive results show that 3 to 5 annotations yield models that: 1) significantly mitigate the effects of covariate shift; 2) match or exceed other adaptation methods; and 3) even approach methods fully retrained on the target distribution. Adaptation training takes only a few minutes, striking a balance between model performance, computational demands, and the need for expert-level annotation.
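The specialized contrastive loss is not spelled out in this summary. As a rough intuition only, the sketch below shows a generic InfoNCE-style contrastive loss over pixel embeddings, in which an annotated target pixel is pulled toward embeddings with the same label and pushed away from differently labeled ones. The function name, tensor shapes, and temperature are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal, generic sketch of a supervised pixel-level contrastive (InfoNCE) loss,
# assuming PyTorch. This is NOT the exact loss from CellTranspose; names, shapes,
# and the temperature value are illustrative assumptions.
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(anchor, positives, negatives, temperature=0.1):
    """anchor: (D,) embedding of one annotated target pixel.
    positives: (P, D) embeddings sharing the anchor's label.
    negatives: (N, D) embeddings with a different label."""
    anchor = F.normalize(anchor, dim=0)
    positives = F.normalize(positives, dim=1)
    negatives = F.normalize(negatives, dim=1)

    pos_sim = positives @ anchor / temperature   # (P,) similarities to positives
    neg_sim = negatives @ anchor / temperature   # (N,) similarities to negatives

    # Each positive is contrasted against all negatives (InfoNCE style):
    # index 0 of each row holds the positive similarity, the rest are negatives.
    logits = torch.cat([pos_sim.unsqueeze(1),
                        neg_sim.unsqueeze(0).expand(len(pos_sim), -1)], dim=1)
    targets = torch.zeros(len(pos_sim), dtype=torch.long)
    return F.cross_entropy(logits, targets)

# Toy usage with random 16-dimensional embeddings:
loss = pixel_contrastive_loss(torch.randn(16), torch.randn(8, 16), torch.randn(64, 16))
```

Minimizing such a loss on the few annotated target pixels encourages the feature extractor to map target-domain cells and background into the same regions of embedding space that the source-trained segmentation head already understands.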

Introduction:

As the field matures, automated analysis of scientific imaging data through computer vision techniques is becoming increasingly compelling. To accelerate scientific discovery, neural network-based methods have recently been developed to automatically segment and count individual cell instances in laboratory-generated imaging data [44, 51, 14]. The acquisition of this type of data exhibits significant variability due to the variety of imaging modalities used, the different tissue types, and how they were processed.

Current task-specific cell instance segmentation methods are mainly based on supervised learning. They are trained on large datasets in an attempt to cover the diversity of the new data they will be working with. However, the new data to be processed will most likely not be distributed in the same way as the data used to train the model, and as a result the model will often perform the task with disappointing accuracy. To address this covariate shift problem [42], the obvious solution is to retrain the model, which is expensive and time-consuming since it requires manual annotation of large amounts of new target data, the very task we are trying to automate. Another approach is to use domain adaptation methods, which try to adapt the model to the target data distribution. Current domain-adaptive segmentation methods are largely tailored to imaging modalities, tasks, or applications that are very different from cell instance segmentation [15, 53]. Some promising works have addressed this problem in an unsupervised manner [26, 27], but these methods all assume that a large amount of target data is available and that relatively intensive training can be performed to adapt the model.

In this work, we advocate a more practical and scalable solution to the need to generalize well beyond the training distribution. We assume that the model used to segment instances (e.g. cell bodies, membranes or nuclei) has been trained on a source dataset. Then, by annotating only a few samples of the target dataset, we tune the model with a low training budget so that it generalizes well to the new distribution. We introduce CellTranspose, a new method implementing the paradigm just described, for few-shot supervised adaptation of cell instance segmentation. The method builds on state-of-the-art models, and we introduce new losses and training procedures for fast adaptation to new data (a rough sketch of this adaptation loop is given below).
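As a sketch of the paradigm just described (pretrain on source data, then briefly fine-tune on a handful of annotated target samples), the snippet below shows a generic few-shot fine-tuning loop. The model interface, dataset format, loss functions, and hyperparameters are all illustrative assumptions rather than the paper's actual training recipe.

```python
# Illustrative sketch (assuming PyTorch) of a low-budget few-shot adaptation loop:
# start from a model pretrained on source data and briefly fine-tune it on the
# 3-5 annotated target images. Interfaces and hyperparameters are assumptions.
import torch
from torch.utils.data import DataLoader

def adapt_few_shot(model, target_support, seg_loss_fn, contrastive_loss_fn=None,
                   lr=1e-4, epochs=20, contrastive_weight=1.0):
    """target_support: a small Dataset of (image, mask) pairs from the target domain."""
    loader = DataLoader(target_support, batch_size=1, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for image, mask in loader:
            pred = model(image)                 # source-pretrained segmenter
            loss = seg_loss_fn(pred, mask)      # standard segmentation loss
            if contrastive_loss_fn is not None:
                # optionally add a contrastive term built from the few annotations
                loss = loss + contrastive_weight * contrastive_loss_fn(pred, mask)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

Because only a few images and a few epochs are involved, a loop of this kind finishes in minutes on a single GPU, which is the practical budget the paper targets.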

Our framework enables appropriate segmentation of a wide range of data, beyond the capabilities of current general-purpose methods. We show that learning to generate high-fidelity segmentations requires only a small number of annotations from the target dataset, and demonstrate this on both 2-D and 3-D data. In particular, a few annotated samples are sufficient to achieve a level of adaptation comparable to unsupervised adaptation models. Furthermore, CellTranspose requires a much shorter training protocol than training a model from scratch, while achieving similar accuracy.


Fig. 1. Variability in microscopy data. The image samples highlight the variability in cell images. From left to right: human U2OS cells stained with Hoechst and phalloidin, from BBBC006 [29]; neuroblastoma cells labeled with phalloidin and DAPI, from the Cell Image Library [60]; gastrointestinal tissue cells from CODEX imaging, from TissueNet [14]; breast cancer cells stained with hematoxylin and eosin (H&E), from TNBC [34].


Fig. 2. Architecture of the method. A comprehensive illustration of our method for few-shot cellular instance segmentation based on contrastive learning.


Source: https://blog.csdn.net/qq_36396104/article/details/128840837