Abstract

Research on optical coherence tomography angiography (OCTA) images has received extensive attention in recent years because OCTA provides detailed information about retinal structures. Automatic segmentation of retinal vessels (RVs) has become a key step in quantifying retinal indicators. Various methods with cutting-edge designs and techniques have been proposed in the literature. However, most of them learn features from single-modal data only, ignoring the potential correlation between data from different modalities. Clinically, 2D projection maps are more convenient for doctors to examine, whereas 3D volumes preserve the intrinsic retinal structure. We therefore propose a novel multi-modal feature mutual learning framework that combines local mutual learning and global mutual learning to capture both 2D and 3D information. Within the framework, the 3D model and the 2D model learn collaboratively and teach each other throughout training. Experimental results show that our method outperforms previous deep-learning methods on RV segmentation. Generalization experiments on the ROSE dataset demonstrate the portability and scalability of the proposed framework.
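The abstract does not specify the exact loss formulation, but the idea of two models "teaching each other" is commonly realized as deep mutual learning: each model minimizes its own supervised loss plus a KL-divergence term pulling its per-pixel prediction toward the peer's. The sketch below illustrates this generic scheme with NumPy; the function names, the use of KL divergence, and the assumption that the 3D model's output has been projected to the 2D plane are all illustrative assumptions, not the paper's stated method.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p, q, eps=1e-8):
    """KL(p || q), averaged over all pixels. eps avoids log(0)."""
    return float(np.mean(np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)))

def mutual_learning_losses(logits_2d, logits_3d_proj, ce_2d, ce_3d):
    """Hypothetical mutual-learning objective for two segmentation models.

    logits_2d:       (H, W, C) logits from the 2D model.
    logits_3d_proj:  (H, W, C) logits from the 3D model, already projected
                     to the 2D plane (an assumption for this sketch).
    ce_2d, ce_3d:    each model's own supervised cross-entropy loss.
    """
    p2d = softmax(logits_2d)
    p3d = softmax(logits_3d_proj)
    # Each model's total loss = supervised term + KL toward the peer's prediction.
    loss_2d = ce_2d + kl_div(p3d, p2d)  # 3D prediction teaches the 2D model
    loss_3d = ce_3d + kl_div(p2d, p3d)  # 2D prediction teaches the 3D model
    return loss_2d, loss_3d
```

When the two models agree exactly, both KL terms vanish and each model simply minimizes its own supervised loss; disagreement adds a penalty that nudges the predictions toward each other during training.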
