Abstract

In this work, we present an unsupervised domain adaptation (UDA) method, named Panoptic Domain Adaptive Mask R-CNN (PDAM), for unsupervised instance segmentation in microscopy images. Since methods designed specifically for UDA instance segmentation are currently lacking, we first design a Domain Adaptive Mask R-CNN (DAM) as the baseline, with cross-domain feature alignment at the image and instance levels. Second, because domain bias also exists at the semantic level in the contextual information, beyond the image- and instance-level discrepancy, we design a semantic segmentation branch with a domain discriminator to bridge the domain gap at the contextual level; by integrating the semantic- and instance-level feature adaptation, our method aligns the cross-domain features at the panoptic level. Third, we propose a task re-weighting mechanism that assigns trade-off weights to the detection and segmentation loss functions, alleviating the domain bias issue by down-weighting task learning in iterations where the features contain source-specific factors. Furthermore, we design a feature similarity maximization mechanism to facilitate instance-level feature adaptation from the perspective of representation learning. Unlike typical feature alignment methods, our feature similarity maximization mechanism separates the domain-invariant and domain-specific features by enlarging their feature distribution dependency. Experimental results on three UDA instance segmentation scenarios with five datasets demonstrate the effectiveness of the proposed PDAM, which outperforms state-of-the-art UDA methods by a large margin.
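
To make the adversarial alignment and task re-weighting described above concrete, the following is a minimal PyTorch-style sketch. It assumes pooled image-, instance-, and semantic-level feature vectors, one binary domain discriminator per level, a gradient-reversal layer, and an illustrative re-weighting rule driven by the image-level discriminator's confidence. All names (DomainDiscriminator, task_reweight, pdam_step), shapes, and the exact re-weighting rule are assumptions for illustration, not the authors' reference implementation; the feature similarity maximization mechanism is omitted.

```python
# Illustrative sketch of panoptic-level adversarial feature alignment
# (image, instance, semantic) with a task re-weighting rule. Hypothetical
# names and shapes; not the PDAM reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used for adversarial domain alignment."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the backbone.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class DomainDiscriminator(nn.Module):
    """Predicts source (0) vs. target (1) from a pooled feature vector."""

    def __init__(self, in_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, feat):
        return self.net(feat)


def adversarial_loss(disc, feat, domain_label, lambd=1.0):
    """Domain classification loss with reversed gradients, so the feature
    extractor is pushed toward domain-invariant representations."""
    logits = disc(grad_reverse(feat, lambd))
    target = torch.full_like(logits, float(domain_label))
    return F.binary_cross_entropy_with_logits(logits, target)


def task_reweight(img_logits_src):
    """Assumed rule: if the image-level discriminator is confident the source
    features are source-specific, down-weight the supervised losses."""
    p_src = torch.sigmoid(-img_logits_src).mean()  # probability of "source"
    return (1.0 - p_src).clamp(min=0.1).detach()


def pdam_step(src_feats, tgt_feats, sup_losses, discs, lambd=0.1):
    """One hypothetical iteration: supervised losses on the source domain plus
    image-, instance-, and semantic-level adversarial losses on both domains."""
    d_img, d_ins, d_sem = discs
    adv = 0.0
    for level, disc in (("img", d_img), ("ins", d_ins), ("sem", d_sem)):
        adv = adv + adversarial_loss(disc, src_feats[level], 0, lambd)
        adv = adv + adversarial_loss(disc, tgt_feats[level], 1, lambd)
    w = task_reweight(d_img(src_feats["img"]))
    total = w * (sup_losses["det"] + sup_losses["mask"] + sup_losses["sem"]) + adv
    return total, w


# Example usage with random 256-d pooled features (purely illustrative):
discs = [DomainDiscriminator(256) for _ in range(3)]
src = {k: torch.randn(4, 256) for k in ("img", "ins", "sem")}
tgt = {k: torch.randn(4, 256) for k in ("img", "ins", "sem")}
sup = {k: torch.rand(1, requires_grad=True).sum() for k in ("det", "mask", "sem")}
loss, w = pdam_step(src, tgt, sup, discs)
loss.backward()
```

The design choice sketched here follows the abstract's description: the three discriminators provide panoptic-level alignment, while the detached weight w scales the detection and segmentation losses so that task learning is relaxed in iterations where the features appear source-specific.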
