Cataracts are common eye disorders characterized by clouding of the lens, which prevents light from passing through and impairs vision. Various factors, including changes in the lens’s hydration or alterations in its proteins, may contribute to their development. Regular eye examinations by an ophthalmologist or optometrist are essential for detecting cataracts and other ocular conditions early. Manual checks by caregivers suffer from subjectivity, human error, and a lack of expertise. Biomedical feature fusion combines or links disease-specific characteristics drawn from different medical imaging sources; its primary objectives in disease classification are to reduce the error rate and enrich the set of retrieved features. The aim of this study is to evaluate the outcomes of fusing visual features that characterize cataracts in the left and right eyes. We also investigate the impact of limited variability on deep learning models, specifically in classifying cataract versus normal fundus images. To address this issue, this study introduces CataractNetDetect, an innovative multi-label deep learning classification system that fuses feature representations from pairs of fundus images (e.g., left and right eyes) for the automatic diagnosis of various ocular disorders. Our focus is on improving performance by stacking discriminative deep feature representations so that two fundus images are combined into a unified feature representation. Several deep learning architectures serve as feature descriptors, including ResNet-50, DenseNet-121, and Inception-V3, which enhances the robustness and quality of the representations. These architectures are initialized with ImageNet-pretrained weights and fine-tuned, and the ResNet-50, DenseNet-121, and Inception-V3 models are then combined through an integrated stacking approach. The model is trained on the publicly available ODIR-5K dataset, which contains left and right fundus images from 5,000 patients annotated with eight label categories, ranging from normal to conditions such as cataract, glaucoma, age-related macular degeneration (AMD), diabetes, hypertension, myopia, and other abnormalities. Moreover, extensive image preprocessing is performed, including data augmentation, noise reduction, contrast enhancement, scaling, and circular border cropping. CataractNetDetect achieves an F1-score of 98.0%, an AUC of 97.9%, and a maximum validation score of 100%. This ensemble-based model outperforms the individual ResNet-50, DenseNet-121, and Inception-V3 baselines, underscoring its efficacy in diagnostic applications.
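To illustrate the fusion idea described above, the following is a minimal sketch (not the authors' released code), assuming a TensorFlow/Keras setup with the standard keras.applications backbones: each eye's fundus image is passed through ImageNet-pretrained ResNet-50, DenseNet-121, and Inception-V3 feature descriptors, the pooled features are stacked into a unified representation, and a multi-label sigmoid head predicts the eight ODIR-5K categories. Names such as `build_fusion_model`, the 224x224 input size, and the dense-head sizes are illustrative assumptions, not details taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50, DenseNet121, InceptionV3

NUM_CLASSES = 8        # normal, diabetes, glaucoma, cataract, AMD, hypertension, myopia, other
IMG_SIZE = (224, 224)  # assumed input resolution after preprocessing

def backbone_features(name):
    """Return an ImageNet-pretrained backbone used as a frozen feature descriptor."""
    ctor = {"resnet": ResNet50, "densenet": DenseNet121, "inception": InceptionV3}[name]
    base = ctor(include_top=False, weights="imagenet", pooling="avg",
                input_shape=IMG_SIZE + (3,))
    base.trainable = False  # upper layers could be unfrozen later for fine-tuning
    return base

def build_fusion_model():
    # Two inputs: one preprocessed fundus image per eye.
    left = layers.Input(shape=IMG_SIZE + (3,), name="left_fundus")
    right = layers.Input(shape=IMG_SIZE + (3,), name="right_fundus")

    feats = []
    for name in ("resnet", "densenet", "inception"):
        base = backbone_features(name)
        # The same descriptor is applied to both eyes; pooled features are stacked.
        feats.append(base(left))
        feats.append(base(right))

    fused = layers.Concatenate(name="fused_representation")(feats)
    x = layers.Dense(512, activation="relu")(fused)
    x = layers.Dropout(0.3)(x)
    # Sigmoid head: each of the eight labels is predicted independently (multi-label).
    out = layers.Dense(NUM_CLASSES, activation="sigmoid", name="labels")(x)

    model = Model(inputs=[left, right], outputs=out)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(multi_label=True)])
    return model

model = build_fusion_model()
model.summary()
```

Concatenating pooled features from several backbones is one straightforward way to realize the "stacking" of discriminative representations described in the abstract; the paper's actual integrated stacking procedure may differ in detail.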