Abstract

Semantic segmentation of mitochondria is essential for electron microscopy image analysis. Although supervised learning has achieved great success on this task, it requires a large amount of expensive per-pixel annotations. Recent studies have proposed to exploit similar, already-annotated domains through domain adaptation, but the potentially severe domain shift makes model transfer challenging. In this study, we develop an unsupervised domain adaptation method that adapts a model trained on a labeled source domain to an unlabeled target domain. Specifically, we achieve cross-domain segmentation by integrating the geometrical cues provided by the source-domain annotations with the visual cues latent in the images of both domains, within a framework of adversarial domain-adaptive multi-task learning. Rather than enforcing manually defined shape priors, we learn geometrical cues from the source domain through adversarial learning. Joint adaptation then yields domain-invariant yet discriminative features. Extensive ablations, parameter analyses, and comparisons have been conducted on three benchmarks under various settings. The experiments show that our method performs favorably against state-of-the-art methods in both segmentation accuracy and visual quality.
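To make the adversarial component of the abstract concrete, the following is a minimal numpy sketch of the generic loss structure used in adversarial domain adaptation: a discriminator is trained to distinguish source from target features, while the segmentation network is trained to fool it, so the learned features become domain-invariant. The array values, the weight `lambda_adv`, and the placeholder segmentation loss are all illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def bce(p, y):
    # binary cross-entropy, clipped for numerical stability
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Stand-ins for discriminator outputs (assumed, for illustration only):
# probability that a feature vector comes from the *source* domain.
d_src = rng.uniform(0.5, 1.0, size=64)   # predictions on source features
d_tgt = rng.uniform(0.0, 0.5, size=64)   # predictions on target features

# Discriminator objective: tell the two domains apart
# (source labeled 1, target labeled 0).
loss_disc = bce(d_src, np.ones_like(d_src)) + bce(d_tgt, np.zeros_like(d_tgt))

# Adversarial objective for the segmentation network: make target
# features look like source features (target labeled 1), which pushes
# the feature extractor toward domain-invariant representations.
loss_adv = bce(d_tgt, np.ones_like(d_tgt))

# Combined objective: supervised per-pixel loss on the labeled source
# domain plus the weighted adversarial term.
lambda_adv = 0.1   # assumed trade-off weight
loss_seg = 0.7     # placeholder for the supervised segmentation loss
total = loss_seg + lambda_adv * loss_adv
```

In practice the two objectives are optimized alternately (or via a gradient-reversal layer), and the multi-task setup described in the abstract would add further heads and losses on top of this basic structure.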
