Abstract

Diabetic retinopathy (DR) is a leading cause of visual impairment worldwide, accounting for approximately 4.8% of global blindness cases according to the World Health Organization (WHO). The condition is characterized by pathological abnormalities in the retinal layers, including microaneurysms, vitreous hemorrhages, and exudates. Microscopic analysis of retinal images is therefore crucial for diagnosing and treating DR. This article proposes a novel method for early DR screening based on segmentation and unsupervised learning. The approach integrates a neural network energy-based model into the Fuzzy C-Means (FCM) algorithm to enhance its convergence criterion, with the aim of improving the accuracy and efficiency of automated DR screening tools. The method is evaluated on a primary dataset from the Shiva Netralaya Centre and on the public IDRiD and DIARETDB1 datasets, and its performance is compared against the FCM, EFCM, FLICM, and M-FLICM techniques using accuracy under noiseless and noisy conditions and average execution time as metrics. The results show promising performance on both the primary and secondary datasets, with an accuracy of 99.03% on noiseless images and 93.13% on noisy images and an average execution time of 16.1 s. The proposed method holds significant potential for medical image analysis and could pave the way for further advances in automated DR diagnosis and management.

Research Highlights

A novel approach is proposed that integrates a neural network energy-based model into the FCM algorithm to enhance the convergence criterion and improve the accuracy of automated DR screening tools.

By leveraging the microscopic characteristics of retinal images, the proposed method significantly improves the accuracy of lesion segmentation, facilitating early detection and monitoring of DR.

The method is evaluated on a primary dataset from the Shiva Netralaya Centre and on the IDRiD and DIARETDB1 datasets, and compared against FCM, EFCM, FLICM, and M-FLICM in terms of accuracy under both noiseless and noisy conditions.

It achieves an accuracy of 99.03% on noiseless images and 93.13% on noisy images, with an average execution time of 16.1 s.
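The core of the approach is FCM clustering of retinal image pixels with a modified stopping rule. Since the abstract does not specify the neural network energy-based convergence criterion, the sketch below shows only standard FCM segmentation of grayscale pixel intensities with the conventional membership-change test as a placeholder; the cluster count, fuzziness exponent, and tolerance are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of standard Fuzzy C-Means (FCM) on grayscale pixel intensities.
# The proposed method would replace the membership-change convergence test with
# its neural-network energy-based criterion, which is not detailed here.
import numpy as np

def fcm_segment(image, n_clusters=4, m=2.0, tol=1e-5, max_iter=100, seed=0):
    """Cluster pixel intensities with FCM and return a label image and centres."""
    rng = np.random.default_rng(seed)
    x = image.reshape(-1, 1).astype(np.float64)           # N x 1 intensities
    n = x.shape[0]

    # Random fuzzy memberships, each row summing to 1 (N x C).
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)

    for _ in range(max_iter):
        um = u ** m
        # Cluster centres as membership-weighted means of the intensities.
        centers = (um.T @ x) / um.sum(axis=0)[:, None]    # C x 1

        # Distances from every pixel to every centre (avoid division by zero).
        dist = np.abs(x - centers.T) + 1e-10               # N x C

        # Standard FCM membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        power = 2.0 / (m - 1.0)
        u_new = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** power).sum(axis=2)

        # Placeholder convergence test (membership change below tolerance).
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new

    labels = u.argmax(axis=1).reshape(image.shape)
    return labels, centers.ravel()
```

In a screening pipeline of the kind described, the resulting label map would then be post-processed to isolate lesion-like clusters (e.g., microaneurysms and exudates) for grading.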
