Abstract

Glaucoma is a common eye disorder in which increased intraocular pressure damages the optic nerve, ultimately leading to partial or complete blindness that cannot be clinically reversed. Hence, it is of utmost importance to screen for and detect glaucoma at an early stage. Most earlier glaucoma diagnosis methods rely on manual feature engineering, which is time-consuming and requires domain expertise. Although recent methods, particularly convolutional neural networks (CNNs), learn high-level feature representations directly from fundus images, they require large numbers of parameters and are prone to overfitting when training samples are insufficient. Further, conventional CNNs often overlook minute changes in the lesion region. To overcome these issues, we propose a lightweight multi-scale CNN architecture, called CDAM-Net, for effective glaucoma identification from retinal fundus images. Additionally, we introduce an attention module, channel shuffle dual attention (CSDA), comprising a channel attention block, a spatial attention block, and a channel shuffle unit; it focuses on important regions of the fundus images and thereby extracts class-specific features. CDAM-Net mainly consists of multi-scale feature representation (MFR) blocks that extract multi-scale features from fundus images; each MFR block is followed by a CSDA module, which further enriches the feature representation. CDAM-Net is evaluated on a retinal fundus image (RFI) dataset containing 1426 fundus images (837 glaucoma and 589 normal), and the results indicate that it yields promising classification performance compared to existing techniques. Ablation studies are also carried out to verify the effectiveness of each component of CDAM-Net.
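
The abstract describes the CSDA module only at the level of its three parts (a channel attention block, a spatial attention block, and a channel shuffle unit). The PyTorch sketch below shows one plausible arrangement of these operations; the ordering, reduction ratio, spatial kernel size, and number of shuffle groups are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a CSDA-style module (channel attention + spatial
# attention + channel shuffle). All hyperparameters here are assumptions;
# the abstract does not specify the exact design used in CDAM-Net.
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups (as in ShuffleNet)."""
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(b, c, h, w)


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style re-weighting of channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(self.pool(x))  # scale each channel by its weight


class SpatialAttention(nn.Module):
    """CBAM-style spatial re-weighting from pooled channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)       # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)      # channel-wise max map
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                         # scale each spatial location


class CSDA(nn.Module):
    """Channel attention, then spatial attention, then a channel shuffle
    to mix information across channel groups (assumed ordering)."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
        self.groups = groups

    def forward(self, x):
        x = self.ca(x)
        x = self.sa(x)
        return channel_shuffle(x, self.groups)


# Usage: refine a feature map such as the output of an MFR block.
feats = torch.randn(2, 64, 56, 56)
out = CSDA(64)(feats)
print(out.shape)  # torch.Size([2, 64, 56, 56])
```

In CDAM-Net each MFR block is reportedly followed by such a module, so the attention-refined, shuffled features would feed the next multi-scale stage.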
