Abstract
Because retinal fundus image acquisition devices rely on different imaging principles, domain shift often arises between datasets. As a result, a segmentation network well-trained on one dataset (the source domain) usually performs poorly on another dataset (the target domain), forcing the new dataset (target domain) to be annotated so that the segmentation network can be retrained; however, annotating a new dataset is time-consuming and laborious. To address this problem, we propose a novel unsupervised domain adaptation method for optic disc and cup segmentation. Specifically, we first employ a self-ensembling-based domain adaptation method to effectively align the features of the source and target domains. We then design a novel backbone network (MBU-Net) that makes full use of mask and boundary information to improve the segmentation performance of self-ensembling. Finally, we propose an output-level adversarial domain adaptation (OADA) scheme to address domain shift in the structured output space of self-ensembling. In experiments, we evaluate our method on three target domain datasets: Target Domain 1 (RIM-ONE_r3 dataset), Target Domain 2 (Drishti-GS dataset), and Target Domain 3 (REFUGE dataset). The experimental results demonstrate that our method outperforms the compared state-of-the-art methods on the optic disc and cup segmentation tasks.
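The self-ensembling alignment mentioned in the abstract typically maintains a teacher model as an exponential moving average (EMA) of the student's weights and penalizes disagreement between their predictions on target-domain inputs. The following is a minimal numerical sketch of that mechanism, not the paper's implementation; the function names, the EMA decay value, and the use of a mean-squared consistency loss are illustrative assumptions:

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher style update: teacher weights track an EMA of the student."""
    return alpha * teacher_w + (1.0 - alpha) * student_w

def consistency_loss(student_pred, teacher_pred):
    """Mean-squared consistency loss between student and teacher predictions."""
    return float(np.mean((student_pred - teacher_pred) ** 2))

# Toy illustration: after repeated updates the teacher drifts toward the student,
# so the consistency penalty between their (here, weight-proportional) outputs shrinks.
teacher = np.zeros(3)
student = np.ones(3)
for _ in range(100):
    teacher = ema_update(teacher, student, alpha=0.99)

print(consistency_loss(student, teacher) < consistency_loss(student, np.zeros(3)))
```

In practice the student is trained with a supervised segmentation loss on the source domain, while the consistency term above provides an unsupervised signal on the unlabeled target domain.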
Published in: Engineering Applications of Artificial Intelligence