Abstract

Glaucoma is a progressive eye condition that can lead to permanent vision loss. Timely detection of glaucoma is therefore critical for devising an effective treatment plan. In recent years, considerable effort has been devoted to developing automated glaucoma classification systems that apply CNNs to fundus images. In contrast, few methods have been proposed for diagnosing the different stages of glaucoma, mainly owing to the lack of large, publicly available labeled datasets. In addition, fundus images exhibit high inter-stage resemblance, redundant features, and minute size variations of lesions, making it difficult for conventional CNNs to classify multiple stages of glaucoma accurately. To address these challenges, this paper proposes a novel adapter- and enhanced self-attention-based CNN framework, named AES-Net, for effective classification of glaucoma stages. In particular, we propose a spatial adapter module on top of the backbone network for learning better feature representations, and an enhanced self-attention module (ESAM) to capture global feature correlations among the relevant channels and spatial positions. The ESAM assists in capturing stage-specific and detailed lesion features from the fundus images. Extensive experiments on two multi-stage glaucoma datasets indicate that our AES-Net surpasses existing CNN-based approaches. Grad-CAM++ visualization maps further confirm the effectiveness of our AES-Net.
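The abstract does not specify the internals of the ESAM, but the general idea of self-attention over spatial positions and channels of a CNN feature map can be illustrated with a minimal NumPy sketch. All shapes, the parameter-free dot-product formulation, and the order of the two attention steps here are assumptions for illustration only, not the authors' actual module.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_channel_attention(feat):
    """Toy sketch of combined spatial and channel self-attention.

    feat: (C, H, W) feature map from a backbone (hypothetical shapes).
    Returns a feature map of the same shape, re-weighted so that each
    spatial position and each channel aggregates context from all others.
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)  # (C, N), N = H*W spatial positions

    # Spatial attention: each position attends to every other position.
    attn_s = softmax(x.T @ x / np.sqrt(C), axis=-1)      # (N, N)
    x = x @ attn_s.T                                     # (C, N)

    # Channel attention: each channel attends to every other channel,
    # capturing global correlations among feature channels.
    attn_c = softmax(x @ x.T / np.sqrt(H * W), axis=-1)  # (C, C)
    x = attn_c @ x                                       # (C, N)

    return x.reshape(C, H, W)

# Usage on a random feature map standing in for backbone output:
feat = np.random.default_rng(0).normal(size=(8, 4, 4))
out = spatial_channel_attention(feat)
print(out.shape)  # (8, 4, 4)
```

A trained module would normally use learned query/key/value projections and a residual connection around the attention output; both are omitted here to keep the sketch parameter-free.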
