Abstract

Image segmentation is a long-standing problem in medical image analysis that facilitates clinical diagnosis and intervention. Deep learning has driven progress through supervised training on elaborate human labelling; however, segmentation models trained on a labeled source domain perform poorly in the target domain, leaving existing approaches short on robustness and generalization. Since acquiring medical image labels is expensive and time-consuming, we propose a novel feature disentanglement-based unsupervised domain adaptation (UDA) method to improve the robustness of the trained model in the target domain. A segmentation network is designed to learn disentangled features with two parts: (i) content-related features, which are responsible for the segmentation task and invariant across domains, and (ii) style-related features, which capture the discrepancy between domains. Feature disentanglement (FD) is achieved through multi-task learning and image translation. Knowledge distillation is introduced to improve performance on fine-grained segmentations, and for objects with regular shapes, adversarial training is incorporated to predict shape-invariant segmentation masks across domains. Comprehensive experiments on retina vessel segmentation and sinus surgical instrument segmentation validate the effectiveness of the proposed method. The average Dice over twenty regular transfer directions reaches 79.26% on five public retina vessel segmentation benchmarks, the average Dice over two transfer directions from regular to UWF images attains 72.63%, and the Dice from cadaveric to live images reaches 68.1% on sinus surgical instrument segmentation. These results demonstrate that the proposed method achieves state-of-the-art segmentation performance in the UDA setting.
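The core idea of splitting learned features into a domain-invariant content part and a domain-specific style part can be illustrated with a minimal toy sketch. This is not the paper's architecture or loss; the function names, the half-and-half split, and the mean-squared style-gap measure below are all hypothetical simplifications for intuition only.

```python
# Toy sketch of feature disentanglement for UDA (illustrative only;
# the paper's actual networks, losses, and training are not shown here).
import numpy as np

def disentangle(features, content_dim):
    """Split a feature vector into content (domain-invariant) and
    style (domain-specific) parts. Hypothetical: a fixed index split
    stands in for the learned disentangling encoder."""
    return features[:content_dim], features[content_dim:]

def style_discrepancy(style_a, style_b):
    """Toy measure of the domain gap: mean squared difference of styles."""
    return float(np.mean((np.asarray(style_a) - np.asarray(style_b)) ** 2))

rng = np.random.default_rng(0)
src = rng.normal(size=8)   # features from the labeled source domain
tgt = rng.normal(size=8)   # features from the unlabeled target domain

src_content, src_style = disentangle(src, content_dim=4)
tgt_content, tgt_style = disentangle(tgt, content_dim=4)

# In training, a segmentation loss would be applied to the content part
# only, while the style branches absorb the cross-domain discrepancy.
gap = style_discrepancy(src_style, tgt_style)
print(len(src_content), len(src_style), gap >= 0.0)
```

In the actual method this split is learned rather than fixed, driven by the multi-task and image-translation objectives the abstract describes, so that only the content features feed the segmentation head.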
