Abstract

Due to domain shifts, deep cell/nucleus detection models trained on one microscopy image dataset may not generalize to other datasets acquired with different imaging modalities. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently been exploited to close such domain gaps and has achieved excellent nucleus detection performance. However, training current GAN-based UDA models often requires a large amount of unannotated target data, which may be prohibitively expensive to obtain in practice, and these methods suffer significant performance degradation when only limited target training data are available. In this paper, we study a more realistic yet challenging UDA scenario in which (unannotated) target training data is very scarce, a low-resource setting rarely explored for nucleus detection in previous work. Specifically, we augment a dual GAN network by leveraging a task-specific model to supplement the target-domain discriminator and facilitate generator learning with limited data. The task model is constrained by cross-domain prediction consistency, which encourages semantic content preservation during image-to-image translation. We further incorporate a stochastic, differentiable data augmentation module into the task-augmented GAN network to alleviate discriminator overfitting and thus improve model training. This data augmentation module is plug-and-play, requiring no modification of network architectures or loss functions. We evaluate the proposed low-resource UDA method for nucleus detection on multiple public cross-modality microscopy image datasets. With a single training image in the target domain, our method significantly outperforms recent state-of-the-art UDA approaches and achieves performance that is very competitive with, or superior to, fully supervised models trained on real labeled target data.
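The following is a minimal PyTorch-style sketch of the two ingredients summarized above: a differentiable augmentation applied to both real and generated images before the target-domain discriminator, and a cross-domain prediction consistency term supplied by the task model. The names (generator_s2t, discriminator_t, task_model), the specific augmentations, the loss forms, and the weighting are illustrative assumptions for exposition, not the paper's implementation.

```python
# Sketch only: assumes generator_s2t (source-to-target translator), discriminator_t
# (target-domain discriminator), and task_model (nucleus detector producing score
# maps) are given; these are hypothetical placeholders, not the authors' code.
import torch
import torch.nn.functional as F


def diff_augment(x, brightness=0.2, translate=0.125):
    """Stochastic, differentiable augmentation applied identically to real and
    generated images before the discriminator (DiffAugment-style)."""
    # Random brightness shift (differentiable w.r.t. x).
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5) * brightness
    # Random translation via an affine grid (also differentiable).
    n = x.size(0)
    theta = torch.eye(2, 3, device=x.device).repeat(n, 1, 1)
    theta[:, :, 2] = (torch.rand(n, 2, device=x.device) - 0.5) * 2 * translate
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, padding_mode="reflection", align_corners=False)


def compute_losses(src_img, tgt_img, generator_s2t, discriminator_t, task_model,
                   lambda_consist=1.0):
    """Returns discriminator and generator losses for one step; in practice the
    two would be minimized alternately, and the dual (target-to-source) branch
    would be handled symmetrically."""
    fake_tgt = generator_s2t(src_img)  # source -> target translation

    # Discriminator sees augmented real and fake target images; the augmentation
    # is plug-and-play and leaves architectures and GAN losses unchanged.
    d_real = discriminator_t(diff_augment(tgt_img))
    d_fake = discriminator_t(diff_augment(fake_tgt.detach()))
    loss_d = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()

    # Task model supplements the discriminator: cross-domain prediction
    # consistency encourages the translation to preserve nuclei (semantic content).
    pred_src = task_model(src_img)
    pred_fake = task_model(fake_tgt)
    loss_consist = F.mse_loss(pred_fake, pred_src.detach())

    loss_adv = F.softplus(-discriminator_t(diff_augment(fake_tgt))).mean()
    loss_g = loss_adv + lambda_consist * loss_consist
    return loss_d, loss_g
```

Because the augmentation is differentiable, gradients flow through it to the generator, which is what allows the discriminator to be regularized without altering the adversarial objective; the consistency weight lambda_consist is an assumed hyperparameter.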
