Abstract
The fundus images of patients with Diabetic Retinopathy (DR) often display numerous lesions scattered across the retina. Current methods typically use the entire image for network learning, which is limiting because DR abnormalities are usually localized, and training Convolutional Neural Networks (CNNs) on global images can be hampered by excessive noise. It is therefore crucial to enhance the visibility of important regions and focus the recognition system on them to improve accuracy. This study investigates the classification of diabetic retinopathy severity in eye fundus images, employing appropriate preprocessing techniques to enhance image quality. We propose a novel two-branch attention-guided convolutional neural network (AG-CNN) with initial image preprocessing to address these issues. The AG-CNN first establishes attention over the entire image with the global branch and then incorporates a local branch to compensate for any lost discriminative cues. We conduct extensive experiments on the APTOS 2019 DR dataset. Our baseline model, DenseNet-121, achieves average accuracy/AUC values of 0.9746/0.995, respectively. Integrating the local branch, the AG-CNN improves the average accuracy/AUC to 0.9848/0.998, marking a significant advance over the previous state of the art in the field.
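To make the two-branch design concrete, the sketch below outlines one plausible way such an attention-guided network could be assembled with DenseNet-121 backbones: the global branch processes the full fundus image, its feature activations provide a coarse attention map used to crop a salient region, the local branch re-examines that crop, and the two feature vectors are fused for the final severity prediction. The class name `AGCNN`, the attention-cropping threshold, and the concatenation-based fusion head are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of a two-branch attention-guided CNN for DR grading.
# Names, the cropping heuristic, and the fusion strategy are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class AGCNN(nn.Module):
    """Global branch sees the whole image; its activation map guides a crop
    that the local branch re-examines; fused features give the 5-class
    DR severity prediction."""

    def __init__(self, num_classes: int = 5, crop_threshold: float = 0.7):
        super().__init__()
        self.crop_threshold = crop_threshold  # assumed attention cutoff
        # Two DenseNet-121 backbones (ImageNet-pretrained weights would
        # normally be loaded; omitted here to keep the sketch self-contained).
        self.global_branch = models.densenet121(weights=None)
        self.local_branch = models.densenet121(weights=None)
        feat_dim = self.global_branch.classifier.in_features  # 1024
        self.global_branch.classifier = nn.Identity()
        self.local_branch.classifier = nn.Identity()
        # Fusion head over concatenated global + local features (assumption)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def _attention_crop(self, images, feat_maps):
        """Crop each image where the global feature activation is strongest."""
        # Channel-wise mean gives a coarse attention heatmap per image
        heatmaps = feat_maps.mean(dim=1, keepdim=True)            # (N, 1, h, w)
        heatmaps = nn.functional.interpolate(
            heatmaps, size=images.shape[-2:], mode="bilinear", align_corners=False)
        crops = []
        for img, hm in zip(images, heatmaps):
            hm = (hm - hm.min()) / (hm.max() - hm.min() + 1e-8)
            mask = hm[0] >= self.crop_threshold
            ys, xs = torch.nonzero(mask, as_tuple=True)
            if len(ys) == 0:                      # fall back to the full image
                crop = img
            else:
                crop = img[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            crops.append(nn.functional.interpolate(
                crop.unsqueeze(0), size=images.shape[-2:], mode="bilinear",
                align_corners=False))
        return torch.cat(crops, dim=0)

    def forward(self, images):
        # Global branch: features over the whole image
        g_maps = self.global_branch.features(images)              # (N, 1024, h, w)
        g_feat = nn.functional.adaptive_avg_pool2d(torch.relu(g_maps), 1).flatten(1)
        # Local branch: re-examine the attention-selected region
        local_imgs = self._attention_crop(images, g_maps)
        l_maps = self.local_branch.features(local_imgs)
        l_feat = nn.functional.adaptive_avg_pool2d(torch.relu(l_maps), 1).flatten(1)
        return self.classifier(torch.cat([g_feat, l_feat], dim=1))


if __name__ == "__main__":
    model = AGCNN()
    logits = model(torch.randn(2, 3, 224, 224))   # two 224x224 RGB fundus images
    print(logits.shape)                           # torch.Size([2, 5])
```

This layout reflects the abstract's description of a global branch that establishes attention over the whole image and a local branch that recovers discriminative cues from the highlighted region; the actual cropping rule and fusion used in the paper may differ.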