Abstract

Accurate segmentation of retinal blood vessels can help ophthalmologists diagnose diseases such as diabetes and hypertension. Vessel segmentation poses a number of challenges: some arise from haemorrhages and microaneurysms in fundus images, while others stem from the central vessel reflex and low contrast. Encoder-decoder networks have recently achieved excellent performance in retinal vessel segmentation, but at the cost of increased computational complexity. In this work, we use the Anam-Net model to segment retinal vessels accurately at a low computational cost. Anam-Net is a lightweight convolutional neural network (CNN) with bottleneck layers in both the encoder and decoder stages. Compared to the standard U-Net and R2U-Net models, Anam-Net has 6.9 and 10.9 times fewer parameters, respectively. We evaluated the model on three open-access datasets: DRIVE, STARE, and CHASE_DB. The results show that Anam-Net achieves better segmentation accuracy than several state-of-the-art methods. On the DRIVE, STARE, and CHASE_DB datasets, the model achieved a sensitivity and accuracy of 0.8601 and 0.9660, 0.8697 and 0.9728, and 0.8553 and 0.9746, respectively. We also conducted cross-training experiments across the DRIVE, STARE, and CHASE_DB datasets; the outcomes demonstrate the generalizability and robustness of the Anam-Net model.
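
To make the architectural idea concrete, below is a minimal PyTorch-style sketch of an encoder-decoder with squeeze-and-expand bottleneck blocks and skip connections, in the spirit of Anam-Net. The class names, channel widths, and two-level depth are illustrative assumptions for exposition, not the paper's exact configuration.

# Minimal sketch (assumed PyTorch implementation) of an Anam-Net-style
# encoder-decoder for vessel segmentation. Block structure, channel widths,
# and depth are illustrative assumptions, not the authors' configuration.
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Squeeze-and-expand bottleneck block used in the encoder/decoder stages."""
    def __init__(self, channels, squeeze=4):
        super().__init__()
        mid = max(channels // squeeze, 1)
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),   # squeeze
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),   # expand
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.block(x))   # residual connection

class AnamNetSketch(nn.Module):
    """Lightweight U-shaped network with bottleneck layers and a skip connection."""
    def __init__(self, in_ch=3, out_ch=1, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1),
                                  nn.ReLU(inplace=True), Bottleneck(base))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1),
                                  nn.ReLU(inplace=True), Bottleneck(base * 2))
        self.pool = nn.MaxPool2d(2)
        self.bridge = Bottleneck(base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1),
                                  nn.ReLU(inplace=True), Bottleneck(base))
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)  # per-pixel vessel logits

    def forward(self, x):
        e1 = self.enc1(x)                    # full-resolution features
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bridge(e2)
        d1 = self.up(b)                      # upsample back to full resolution
        d1 = self.dec1(torch.cat([d1, e1], dim=1))  # skip connection from encoder
        return self.head(d1)

if __name__ == "__main__":
    model = AnamNetSketch()
    logits = model(torch.randn(1, 3, 64, 64))   # fundus patch -> vessel logits
    print(logits.shape)  # torch.Size([1, 1, 64, 64])

The bottleneck (squeeze-and-expand) blocks are what keep the parameter count far below that of a standard U-Net at comparable depth, since most 3x3 convolutions operate on the reduced channel width.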
