Abstract
Optical coherence tomography angiography (OCTA) is an advanced imaging technology that can present the three-dimensional (3D) structure of retinal vessels (RVs). Quantitative analysis of retinal vessel density and the foveal avascular zone (FAZ) area is of great significance in clinical diagnosis, and automatic pixel-level semantic segmentation supports such quantitative analysis. Existing segmentation methods cannot effectively exploit the volume data and the projection-map data of OCTA images at the same time, and they fail to balance global perception with local detail, which leads to problems such as discontinuous segmentation results and biased morphological estimates. To better assist physicians in clinical diagnosis and treatment, the segmentation accuracy of RVs and the FAZ needs to be further improved. In this work, we propose an effective retinal image projection segmentation network (RPS-Net) to achieve accurate RV and FAZ segmentation. Our method comprises three components. First, we use two parallel projection paths to learn global perceptual features and complementary local details. Second, we use a dual-way projection learning module to reduce the depth dimension of the 3D data and learn spatial image features. Finally, we merge the two-dimensional features learned from the volume data with the two-dimensional projection data and use a U-shaped network to refine them and generate the final result. We validated our model on OCTA-500, a large multi-modal, multi-task retinal dataset. The experimental results show that our method achieves state-of-the-art performance and outperforms existing methods: the mean Dice coefficients for RVs are 89.89 ± 2.60 (%) and 91.40 ± 9.18 (%) on the two subsets, while the Dice coefficients for the FAZ are 91.55 ± 2.05 (%) and 97.80 ± 2.75 (%), respectively. Our method makes full use of the information in both the 3D and 2D data to generate segmentation maps with higher continuity and accuracy. Code is available at https://github.com/hchuanZ/MFFN/tree/master.
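To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the general idea: collapse the depth axis of the 3D OCTA volume through two parallel projection paths, concatenate the resulting 2D features with the 2D projection maps, and refine with a small encoder-decoder standing in for the U-shaped network. All module and parameter names here (ProjectionPath, DualProjectionSegNet, n_proj_maps, etc.) are our own illustrative assumptions and do not reproduce the authors' released implementation.

```python
# Minimal sketch of the "project 3D -> 2D, then fuse with 2D projection maps" idea.
# All names are illustrative assumptions, not the authors' published code.
import torch
import torch.nn as nn

class ProjectionPath(nn.Module):
    """Collapse the depth axis of a 3D OCTA volume into 2D feature maps."""
    def __init__(self, in_ch: int, out_ch: int, pool: str = "max"):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        # "mean" pooling approximates a global-perception path; "max" keeps local detail.
        self.pool = pool

    def forward(self, x):                        # x: (B, C, D, H, W)
        f = self.conv3d(x)
        if self.pool == "max":
            return f.max(dim=2).values           # (B, out_ch, H, W)
        return f.mean(dim=2)                     # (B, out_ch, H, W)

class DualProjectionSegNet(nn.Module):
    """Two parallel projection paths, fused with 2D projection maps, then a 2D refiner."""
    def __init__(self, n_proj_maps: int = 2, feat_ch: int = 16, n_classes: int = 1):
        super().__init__()
        self.path_global = ProjectionPath(1, feat_ch, pool="mean")
        self.path_local = ProjectionPath(1, feat_ch, pool="max")
        fused_ch = 2 * feat_ch + n_proj_maps
        # Toy encoder-decoder standing in for the full U-shaped network.
        self.enc = nn.Sequential(
            nn.Conv2d(fused_ch, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2))
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, n_classes, 3, padding=1))

    def forward(self, volume, proj_maps):        # volume: (B,1,D,H,W); proj_maps: (B,n,H,W)
        fused = torch.cat(
            [self.path_global(volume), self.path_local(volume), proj_maps], dim=1)
        return self.dec(self.enc(fused))         # (B, n_classes, H, W) segmentation logits

if __name__ == "__main__":
    net = DualProjectionSegNet()
    vol = torch.randn(1, 1, 64, 128, 128)        # downsampled OCTA volume (hypothetical size)
    maps = torch.randn(1, 2, 128, 128)           # e.g. two 2D projection maps
    print(net(vol, maps).shape)                  # torch.Size([1, 1, 128, 128])
```

In this sketch the depth reduction is done with simple mean/max pooling; the paper's dual-way projection learning module is a learned counterpart of that step, and the toy encoder-decoder would be replaced by the full U-shaped refinement network.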