Abstract
Domain adaptation tasks have recently attracted substantial attention in computer vision, as they improve the transferability of deep network models from a source to a target domain with different characteristics. A large body of state-of-the-art domain-adaptation methods was developed for image classification, which may be inadequate for segmentation tasks. We propose to adapt segmentation networks with a constrained formulation, which embeds domain-invariant prior knowledge about the segmentation regions. Such knowledge may take the form of anatomical information, for instance, structure size or shape, which can be known a priori or learned from the source samples via an auxiliary task. Our general formulation imposes inequality constraints on the network predictions of unlabeled or weakly labeled target samples, thereby implicitly matching the prediction statistics of the target and source domains within the permitted uncertainty of the prior knowledge. Furthermore, our inequality constraints easily integrate weak annotations of the target data, such as image-level tags. We address the ensuing constrained optimization problem with differentiable penalties, fully suited for conventional stochastic gradient descent approaches. Unlike common two-step adversarial training, our formulation is based on a single segmentation network, which simplifies adaptation while improving training quality. Comparison with state-of-the-art adaptation methods reveals considerably better performance of our model on two challenging tasks. In particular, it consistently yields a performance gain of 1-4% Dice across architectures and datasets. Our results also show robustness to imprecision in the prior knowledge. Our versatile approach can be readily applied to various segmentation problems, and the code is publicly available.