Abstract
Optical Coherence Tomography Angiography (OCTA) vessel segmentation is a challenging task. On the one hand, the complex structure of capillary networks presents significant obstacles to accurate vessel segmentation. On the other hand, current research on OCTA vessel segmentation relies heavily on high-quality manual annotations, especially in fully supervised approaches, yet pixel-level annotation of OCTA vessels is time-consuming and labor-intensive. To address these issues, we propose a semi-supervised method called HAIC-Net, which integrates self-supervised learning via a homologous augmented image classification pretext task with dual consistency training comprising data perturbation consistency and topological connectivity consistency. First, we design a self-supervised homologous augmented image classification pretext task that directs the model's attention to similar vascular features in homologous augmented images, thereby extracting rich vessel information from unlabeled images and reducing the dependence on manual annotations. Second, we introduce a dual-consistency structure with topological connectivity consistency that provides constraints from a topological perspective, consistent with the topological characteristics of the vascular network, to enhance the segmentation network's sensitivity to vessel connectivity and reduce topological errors in the segmentation results. We conduct experiments on two publicly available datasets and one private dataset and validate the state-of-the-art performance of the proposed method. On the ROSE-1 dataset, our method achieves 0.9143 accuracy and a 0.7658 Dice coefficient, surpassing other current semi-supervised methods and approaching the performance of state-of-the-art fully supervised methods. Similar results are observed on OCTA500 and our private dataset, demonstrating the effectiveness and superiority of our approach.
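The dual-consistency idea summarized above can be illustrated with a minimal sketch: a data-perturbation consistency term that penalizes disagreement between probability maps predicted under different augmentations, and a topology-aware term that compares the connectivity of the two binarized predictions. The function names, the use of NumPy, and the connected-component-count proxy for topological connectivity are illustrative assumptions, not the paper's actual loss formulation.

```python
# Illustrative sketch (assumed, not the paper's implementation) of
# dual consistency: (1) data perturbation consistency between two
# probability maps, (2) a topology proxy comparing counts of
# 4-connected foreground components after thresholding.
import numpy as np

def connected_components(mask):
    """Count 4-connected foreground components in a binary mask (BFS)."""
    mask = mask.astype(bool)
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1
                stack = [(i, j)]
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

def perturbation_consistency(p1, p2):
    """Mean squared difference between two vessel probability maps."""
    return float(np.mean((p1 - p2) ** 2))

def topology_consistency(p1, p2, thresh=0.5):
    """Topology proxy: mismatch in connected-component counts."""
    return abs(connected_components(p1 > thresh)
               - connected_components(p2 > thresh))
```

In practice both terms would be computed on network outputs for unlabeled images and added, with weighting factors, to the supervised segmentation loss; a differentiable topology surrogate would replace the discrete component count during training.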