Abstract
Unsupervised domain adaptation (UDA) has attracted interest as a means of alleviating the burden of data annotation. Nevertheless, existing UDA segmentation methods degrade on fine intracranial vessel segmentation tasks because of structure mismatch in the image synthesis procedure. To improve image synthesis quality and segmentation performance, a novel UDA segmentation method with structure preservation approaches, named StruP-Net, is proposed. StruP-Net employs adversarial learning for image synthesis and uses two domain-specific segmentation networks to enhance the semantic consistency between real and synthesized images. Additionally, two distinct structure preservation approaches, feature-level structure preservation (F-SP) and image-level structure preservation (I-SP), are proposed to alleviate the structure mismatch problem in image synthesis. The F-SP, composed of two domain-specific graph convolutional networks (GCNs), provides feature-level constraints that enhance the structural similarity between real and synthesized images, while the I-SP imposes image-level structural-similarity constraints based on a perceptual loss. Cross-modality experiments from magnetic resonance angiography (MRA) images to computed tomography angiography (CTA) images show that StruP-Net achieves better segmentation performance than other state-of-the-art methods. Furthermore, its high inference efficiency demonstrates the clinical application potential of StruP-Net. The code is available at https://github.com/Mayoiuta/StruP-Net.
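The abstract describes I-SP only as a perceptual-loss constraint on structural similarity between real and synthesized images. As a minimal sketch of what such a constraint typically looks like (assuming a PyTorch implementation with a frozen pretrained VGG-16 feature extractor and L1 feature matching; the backbone, layer depth, and distance are illustrative assumptions, not details taken from the paper):

```python
import torch.nn as nn
from torchvision.models import vgg16


class PerceptualLoss(nn.Module):
    """Illustrative perceptual loss: matches frozen VGG-16 feature maps
    of a synthesized image against those of the corresponding real image,
    penalizing structural discrepancies introduced by image synthesis."""

    def __init__(self, layer_idx=16):  # layer depth is an assumption
        super().__init__()
        # Frozen convolutional trunk of a pretrained VGG-16 as a
        # fixed feature extractor.
        self.features = vgg16(weights="IMAGENET1K_V1").features[:layer_idx].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.criterion = nn.L1Loss()

    def forward(self, synthesized, real):
        # Single-channel angiography slices are repeated to 3 channels
        # to match the VGG input format.
        if synthesized.shape[1] == 1:
            synthesized = synthesized.repeat(1, 3, 1, 1)
            real = real.repeat(1, 3, 1, 1)
        return self.criterion(self.features(synthesized), self.features(real))
```

In a synthesis pipeline of this kind, the term would be added to the generator objective, e.g. `loss = adv_loss + lambda_isp * PerceptualLoss()(fake_cta, real_mra)`, so that gradients push the synthesized image toward the same mid-level structure as its source.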