Diabetic retinopathy screening using machine learning: a systematic review
Diabetic retinopathy (DR) is a leading cause of blindness worldwide. Early identification and prompt treatment are crucial to preventing the vision impairment it causes. Manual screening of retinal fundus images is challenging and time-consuming, and there is a significant gap between the number of DR patients and the number of medical experts available to screen them. Integrating machine learning (ML) and deep learning (DL) techniques is becoming a viable alternative to traditional DR screening. However, the absence of retinal datasets with standardized quality, the complexity of DL models, and the need for high computational resources remain challenges. In this study, we therefore analyze the research landscape of integrating ML techniques into DR screening. Our work contributes in several aspects. First, we identify and characterize the readily available retinal fundus image datasets. Second, we discuss preprocessing techniques commonly used in DR screening. Third, we analyze the progress of ML techniques in DR screening. Finally, we discuss existing challenges and outline future directions.
Supplementary Information: The online version contains supplementary material available at 10.1186/s42490-025-00098-0.
- Discussion
8
- 10.1016/j.lanwpc.2022.100476
- May 8, 2022
- The Lancet Regional Health - Western Pacific
Digital health in medicine: Important considerations in evaluating health economic analysis
- Book Chapter
2
- 10.4018/978-1-6684-5673-6.ch003
- Oct 21, 2022
Machine learning (ML) and deep learning (DL) techniques play a significant role in diabetic retinopathy (DR) detection, whether by grading severity levels or by segmenting retinal lesions. High blood sugar levels due to diabetes cause DR, a leading cause of blindness. Manual detection or grading of DR requires ophthalmologists' expertise, is time-consuming, and is prone to human error. ML and DL algorithms therefore enable automatic DR detection from fundus images. Fundus image analysis supports early detection, monitoring, and treatment evaluation of DR conditions. Understanding fundus image analysis requires strong knowledge of both the imaging system and the roles of ML and DL in computer vision. DL for fundus imaging is a rapidly expanding research area. This chapter presents fundus images, DR, and its severity levels. It also analyzes the performance of various ML- and DL-based DR detection techniques. Finally, the role of ML and DL techniques in DR detection and severity grading is discussed.
- Book Chapter
- 10.1201/9781003246688-8
- Aug 31, 2022
Diabetic retinopathy (DR) is a consequence of diabetes mellitus that can result in total vision loss if left untreated. The key to preventing further DR complications is early identification and treatment. Because manual assessment of pathological alterations in retina images is time-consuming and expensive, computer-aided diagnosis is a very effective way to assist ophthalmologists. For the identification, segmentation, and classification of DR stages and lesions in fundus images, machine learning and deep learning techniques have recently supplanted traditional rule-based approaches. In this research, we compare the many state-of-the-art preprocessing strategies that have been used in deep learning-based DR classification tasks in recent times. Using a baseline deep learning model (ResNet-50) and two publicly accessible retinal datasets (EyePACS and APTOS), the performance of various preprocessing procedures and their combinations is investigated for diverse tasks such as referable DR, DR screening, and five-class DR grading. In the DR screening, referable-DR, and DR grading tasks, the preprocessing strategy consisting of region-of-interest extraction followed by contrast and edge enhancement using Graham's method and z-score intensity normalization achieved the highest accuracies of 98.5%, 96.5%, and 90.59%, respectively, as well as the best quadratic weighted kappa score of 0.945 in the DR grading task. In the DR grading and DR screening tasks, it had the best AUC-ROC of 0.98 and 0.9981, respectively. The results show that the preprocessing pipeline consisting of ROI extraction, followed by edge and contrast enhancement using Graham's method and then z-score intensity normalization, outperforms all other preprocessing pipelines and appears to be the most effective preprocessing strategy for helping the baseline CNN model extract meaningful deep features.
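The winning pipeline above (Graham's contrast/edge enhancement followed by z-score intensity normalization) can be sketched on a small grayscale array. This is a minimal illustration, not the study's code: a 3x3 box blur stands in for the Gaussian blur that Graham's method uses, and all function names and default weights here are assumptions.

```python
def box_blur(img):
    """3x3 box blur with edge clamping (stand-in for a Gaussian blur)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def graham_enhance(img, alpha=4.0, beta=-4.0, gamma=128.0):
    """Enhancement in the spirit of Graham's method:
    out = alpha * img + beta * blur(img) + gamma."""
    blurred = box_blur(img)
    return [[alpha * p + beta * b + gamma for p, b in zip(row, brow)]
            for row, brow in zip(img, blurred)]

def zscore_normalize(img):
    """Z-score intensity normalization: zero mean, unit variance."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    std = var ** 0.5 or 1.0
    return [[(p - mean) / std for p in row] for row in img]
```

In practice the blur, ROI cropping, and resizing would be done with an image library on the full fundus image; the point here is only the order of operations the abstract reports.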
- Research Article
- 10.11591/ijict.v14i2.pp516-528
- Aug 1, 2025
- International Journal of Informatics and Communication Technology (IJ-ICT)
Diabetic retinopathy (DR) is a progressive and sight-threatening complication of diabetes mellitus, characterized by damage to the blood vessels in the retina. Early detection of DR is vital for timely intervention and effective management to prevent irreversible vision loss. This paper provides a comprehensive review of recent advancements in integrating machine learning (ML) and deep learning (DL) techniques for diagnosing DR, aiming to assist ophthalmologists in their manual diagnostic process. The paper presents a comprehensive definition of DR, elucidating the underlying pathological processes, clinical signs, and the various stages of DR classification, ranging from mild non-proliferative to severe proliferative DR. Integrating ML and DL into DR diagnosis has advanced the field by offering automated and efficient methods to analyze retinal images. With high sensitivity and specificity, these techniques demonstrate their efficacy in accurately identifying DR-related lesions, such as microaneurysms, exudates, and hemorrhages. Furthermore, the paper examines diverse datasets employed in training and evaluating ML and DL models for DR diagnosis. These datasets range from publicly available repositories to specialized datasets curated by medical institutions. The role of large-scale and diverse datasets in enhancing model robustness and generalizability is emphasized.
- Research Article
1
- 10.1002/btpr.70013
- Feb 19, 2025
- Biotechnology progress
Machine learning (ML) techniques have emerged as an important tool for improving online process monitoring and control in cell culture processes for biopharmaceutical manufacturing. A variety of advanced ML algorithms were evaluated in this study for cell growth monitoring using spectroscopic tools, including Raman and capacitance spectroscopy. While viable cell density (VCD) can be monitored in real time during the cell culture process, online monitoring of cell viability has not been well established. A thorough comparison between the advanced ML techniques and a traditional linear regression method (e.g., partial least squares regression) reveals a significant improvement in accuracy with the leading ML algorithms (e.g., 31.7% with the Random Forest regressor), addressing the unmet need for continuous, real-time viability monitoring. Both Raman and capacitance spectroscopy demonstrated success in viability monitoring, with Raman exhibiting superior accuracy. In addition, the developed methods showed better accuracy in a relatively high viability range (>90%), suggesting great potential for early fault detection during cell culture manufacturing. A further study using ML techniques for VCD monitoring also showed increased accuracy (27.3% with Raman spectroscopy) compared to traditional linear modeling. The successful integration of ML techniques not only amplifies the potential of process monitoring but also makes possible the development of advanced process control strategies for optimized operations and maximized efficiency.
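The percentage improvements quoted above are naturally read as a relative reduction in prediction error of the ML model against the linear baseline. A sketch of that comparison, with illustrative numbers only (the study's actual error metric and data are not given here):

```python
def rmse(predicted, actual):
    """Root-mean-square error between predictions and reference values."""
    n = len(predicted)
    return (sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n) ** 0.5

def relative_improvement(baseline_error, model_error):
    """Percent reduction in error of a model relative to a baseline."""
    return 100.0 * (baseline_error - model_error) / baseline_error
```

For example, a model whose error is 0.683 times the baseline's corresponds to a 31.7% relative improvement, matching the magnitude of the figure reported for the Random Forest regressor.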
- Research Article
1
- 10.1002/ima.22905
- May 11, 2023
- International Journal of Imaging Systems and Technology
COVID-19 has affected more than 760 million people all over the world, as per the latest WHO records. The rapid proliferation of COVID-19 patients not only created a health emergency but also led to an economic crisis. An early and accurate diagnosis of COVID-19 can help in combating this deadly virus. In line with this, researchers have proposed several machine learning (ML) and deep learning (DL) techniques for detecting COVID-19 since 2020. This article presents currently available manual diagnosis methods along with their limitations. It also provides an extensive survey of ML and DL techniques that can support medical professionals in the precise diagnosis of COVID-19. ML methods, namely K-nearest neighbor, support vector machine (SVM), artificial neural network, decision tree, and naive Bayes, and DL methods, namely deep neural network, convolutional neural network (CNN), region-based convolutional neural network, and long short-term memory networks, are explored. The article also details the latest COVID-19 open-source datasets, consisting of x-ray and computed tomography scan images. A comparative analysis of ML and DL techniques developed for COVID-19 detection in terms of methodology, datasets, sample size, type of classification, performance, and limitations is also given. It has been found that SVM is the most frequently used ML technique, while CNN is the most commonly used DL technique for COVID-19 detection. Challenges of existing datasets have been identified, including size and quality, lack of labeled data, severity level, data imbalance, and privacy concerns. It is recommended that a benchmark dataset overcoming these challenges be established to enhance the effectiveness of ML and DL techniques. Further, hurdles in implementing ML and DL techniques in real-time clinical settings have also been highlighted.
In addition, motivated by observations from the existing methods, the research has been extended with an optimized DL model that attained improved performance using statistical and deep features. With efficient features and proper classifier tuning, the optimized deep model achieves accuracy above 90%.
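Of the ML methods the survey lists, k-nearest neighbor is the simplest to state concretely. A minimal sketch of majority-vote KNN classification, not tied to any of the surveyed implementations (the function name and data are illustrative):

```python
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points, using Euclidean distance."""
    dists = sorted(
        (sum((q - t) ** 2 for q, t in zip(query, point)) ** 0.5, label)
        for point, label in zip(train, labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In the COVID-19 imaging setting, the feature vectors would typically be descriptors extracted from x-ray or CT images rather than raw coordinates.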
- Research Article
41
- 10.1145/3552512
- Jan 15, 2024
- ACM Transactions on Asian and Low-Resource Language Information Processing
Epilepsy is one of the most significant neurological disorders, affecting nearly 65 million people worldwide, and is characterized by repeated seizures. Different algorithms have been proposed for efficient seizure detection using intracranial and surface EEG signals. In the last decade, various machine learning based seizure detection approaches were proposed. This paper discusses different machine learning and deep learning techniques for seizure detection using intracranial and surface EEG signals. A wide range of machine learning techniques, such as support vector machine (SVM) and artificial neural network (ANN) classifiers, and deep learning techniques, such as convolutional neural network (CNN) classifiers and long short-term memory (LSTM) networks, are compared in this paper. The effectiveness of time-domain, frequency-domain, and time-frequency-domain features is discussed along with the different machine learning techniques. Along with EEG, other physiological signals such as the electrocardiogram are used to enhance seizure detection accuracy and are discussed in this paper. In recent years, deep learning based seizure detection techniques have achieved good classification accuracy. In this paper, an LSTM deep-learning-network-based approach is implemented for seizure detection and compared with state-of-the-art methods. The LSTM-based approach achieved 96.5% accuracy in seizure/non-seizure EEG signal classification. Apart from analyzing physiological signals, sentiment analysis also has potential to detect seizures. Impact Statement: This review paper summarizes different research work related to epileptic seizure detection using machine learning and deep learning techniques. Manual seizure detection is time-consuming and requires expertise, so artificial intelligence techniques such as machine learning and deep learning are used for automatic seizure detection.
Different physiological signals are used for seizure detection, and researchers are working on developing automatic seizure detection using EEG, ECG, accelerometer data, and sentiment analysis. There is a need for a review paper that discusses previous techniques and gives further research directions. We have discussed different techniques for seizure detection with an accuracy comparison table, which can help researchers get an overview of both surface and intracranial EEG-based seizure detection approaches. New researchers can easily compare different models and decide which model to start working on. A deep learning model is discussed to give a practical application of seizure detection. Sentiment analysis is another dimension of seizure detection, and summarizing it will give a new perspective to the reader.
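The time-domain features discussed above can be made concrete with a few standard per-window statistics. Mean, variance, and line length are common choices in the seizure-detection literature; the particular selection and function name here are illustrative, not taken from any single surveyed paper:

```python
def time_domain_features(window):
    """Basic time-domain features of one EEG window:
    mean, variance, and line length (sum of absolute
    successive-sample differences, a rough complexity measure)."""
    n = len(window)
    mean = sum(window) / n
    variance = sum((x - mean) ** 2 for x in window) / n
    line_length = sum(abs(b - a) for a, b in zip(window, window[1:]))
    return {"mean": mean, "variance": variance, "line_length": line_length}
```

Feature vectors like this, computed per sliding window, would then feed an SVM/ANN classifier or be stacked into sequences for an LSTM.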
- Conference Article
- 10.1115/imece2023-114273
- Oct 29, 2023
Transportation systems play a pivotal role in modern society, but they are not without inherent risks and inefficiencies. This paper explores the integration of Probabilistic Risk Assessment (PRA) and Machine Learning (ML) techniques to enhance safety and cost optimization in hazardous materials (HAZMAT) transportation. Traditional PRA methods, while robust, are limited by the quality and quantity of data available for analysis. ML techniques can address these limitations by analyzing large datasets, identifying patterns, and making accurate predictions. The integration of ML techniques into PRA can enhance data analysis, prediction capabilities, routing decisions, resource allocation, and decision-making processes in HAZMAT transportation. This paper presents a comprehensive literature review on PRA and ML within the transportation industry, discusses the potential benefits of integrating these approaches, examines the challenges associated with transportation accident data, and suggests areas for further research and improvements in HAZMAT transportation safety analysis.
- Book Chapter
6
- 10.1007/978-981-16-5207-3_51
- Nov 24, 2021
Diabetic retinopathy (DR) is a complication of diabetes mellitus which, if left untreated, may lead to complete vision loss. Early diagnosis and treatment are the key to preventing further complications of DR. Computer-aided diagnosis is a very effective method to support ophthalmologists, as manual inspection of pathological changes in retina images is time-consuming and expensive. In recent times, machine learning and deep learning techniques have supplanted conventional rule-based approaches for detection, segmentation, and classification of DR stages and lesions in fundus images. In this paper, we present a comparative study of the different state-of-the-art preprocessing methods used in deep learning-based DR classification tasks. Performance is analyzed on a publicly available retinal dataset (APTOS) for referable-DR, DR screening, and five-class DR grading tasks, using a benchmark deep learning model (ResNet-50). It has been found that the preprocessing strategy composed of Graham's contrast and edge enhancement and noise reduction, followed by z-score normalization, outperformed the other preprocessing pipelines, achieving the highest accuracies of 89.31%, 79.31%, and 79.82% in the DR screening, referable-DR, and DR grading tasks, respectively. This combination proved to be the most effective preprocessing strategy, achieving the best AUC-ROC of 0.7981 and 0.7982 in the DR screening and referable-DR tasks, respectively.
Keywords: Diabetic retinopathy; Preprocessing; DR severity grading; DR screening; Referable DR; Machine learning; Deep learning; CNN
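Five-class DR grading, as in the study above and its companion work, is conventionally scored with quadratic weighted kappa, which penalizes disagreements between ordinal grades by the square of their distance. A self-contained implementation of the standard formula (the function name is our own):

```python
def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    """Quadratic weighted kappa between two ordinal label sequences
    with labels in [0, n_classes)."""
    n = len(rater_a)
    # Observed confusion matrix.
    observed = [[0.0] * n_classes for _ in range(n_classes)]
    for a, b in zip(rater_a, rater_b):
        observed[a][b] += 1.0
    # Marginal histograms give the chance-agreement (expected) matrix.
    hist_a = [rater_a.count(c) for c in range(n_classes)]
    hist_b = [rater_b.count(c) for c in range(n_classes)]
    # Quadratic disagreement weights.
    w = [[(i - j) ** 2 / (n_classes - 1) ** 2 for j in range(n_classes)]
         for i in range(n_classes)]
    num = sum(w[i][j] * observed[i][j]
              for i in range(n_classes) for j in range(n_classes))
    den = sum(w[i][j] * hist_a[i] * hist_b[j] / n
              for i in range(n_classes) for j in range(n_classes))
    return 1.0 - num / den
```

Perfect agreement yields 1.0, chance-level agreement 0.0, and systematic disagreement a negative value; scores such as the 0.945 reported for the best pipeline indicate near-expert consistency.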
- Book Chapter
2
- 10.1007/978-3-030-74761-9_17
- Jul 28, 2021
As per the World Health Organization (WHO), coronaviruses represent a huge family of viruses that cause disease in humans and animals. The newly discovered coronavirus disease is known as Covid-19 (Cov-19). In December 2019, this virus broke out in Wuhan, China, causing massive havoc worldwide. Computational Intelligence (CI) comprises the design, development, theory, and application of computational methods. Conventionally, the three key components of CI are Artificial Neural Networks (ANN), Fuzzy Systems (FS), and Evolutionary Computation (EC). Lately, techniques like chaotic systems and support vector machines (SVM) have been incorporated into CI. Machine Learning (ML) enables systems to learn automatically without being programmed explicitly, and Deep Learning (DL) is a family of ML techniques based on ANNs. Great potential has been observed in applying CI, ML, and DL techniques to predicting Cov-19. The key objective of this chapter is therefore to present an extensive review of how CI, ML, and DL techniques can be utilized to effectively predict Cov-19. The chapter reviews the different CI, ML, and DL techniques, such as ANN, FS, and EC, that have been applied to Cov-19 prediction. The application and suitability of these techniques for screening and treating patients and tracing contacts, along with Cov-19 forecasting, are discussed in detail. A discussion of why certain CI, ML, and DL techniques are useful for Cov-19 prediction is also presented.
- Research Article
12
- 10.3389/fmed.2022.1050436
- Nov 8, 2022
- Frontiers in Medicine
Diabetic retinopathy (DR) is a late microvascular complication of diabetes mellitus (DM) that can lead to permanent blindness in patients without early detection. Although adequate management of DM via regular eye examination can preserve vision in 98% of DR cases, DR screening and diagnosis based on clinical lesion features devised by expert clinicians are costly, time-consuming, and not sufficiently accurate. This raises the requirement for artificial intelligence (AI) systems that can accurately detect DR automatically, thus preventing vision loss. Such systems can help expert clinicians in certain cases and aid ophthalmologists in rapid diagnosis. To address these requirements, several approaches have been proposed in the literature that use machine learning (ML) and deep learning (DL) techniques to develop such systems. However, these approaches ignore the highly valuable clinical lesion features that could contribute significantly to accurate DR detection. Therefore, in this study we introduce a framework called DR-detector that employs an Extreme Gradient Boosting (XGBoost) ML model trained on the combination of features extracted by pretrained convolutional neural networks, commonly known as transfer learning (TL) models, and clinical retinal lesion features for accurate DR detection. The retinal lesion features are extracted via image segmentation using the UNET DL model and capture exudates (EXs), microaneurysms (MAs), and hemorrhages (HEMs), lesions relevant to DR detection. The feature-combination approach implemented in DR-detector has been applied to two common TL models in the literature, namely VGG-16 and ResNet-50. We trained the DR-detector model using a training dataset comprising 1,840 color fundus images collected from the e-ophtha, retinal lesions, and APTOS 2019 Kaggle datasets, of which 920 images are healthy.
To validate the DR-detector model, we test it on an external dataset consisting of 81 healthy images collected from the High-Resolution Fundus (HRF) and MESSIDOR-2 datasets and 81 images with DR signs collected from the Indian Diabetic Retinopathy Image Dataset (IDRID), annotated for DR by experts. The experimental results show that the DR-detector model achieves a testing accuracy of 100% in detecting DR after training with the combination of ResNet-50 and lesion features, and 99.38% after training with the combination of VGG-16 and lesion features. More importantly, the results also show a higher contribution of specific lesion features to the performance of the DR-detector model. For instance, using only the hemorrhages feature to train the model, it achieves an accuracy of 99.38% in detecting DR, which is higher than when training with the combination of all lesion features (89%) and equal to when training with the combination of all lesion and VGG-16 features together. This highlights the possibility of using only clinically interpretable features, such as lesions, to build the next generation of robust artificial intelligence (AI) systems with great clinical interpretability for DR detection. The code of the DR-detector framework is available on GitHub at https://github.com/Janga-Lab/DR-detector and can be readily employed for detecting DR from retinal image datasets.
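The feature-combination idea at the heart of DR-detector can be sketched as concatenating a CNN-derived feature vector with per-lesion counts, then classifying the result. In this toy illustration a single decision stump on the hemorrhage count stands in for the trained XGBoost model; all names, thresholds, and data here are our assumptions, not the authors' code:

```python
def combine_features(deep_features, lesion_counts):
    """Concatenate TL-model features with clinical lesion counts
    (exudates, microaneurysms, hemorrhages) into one feature vector."""
    return list(deep_features) + [
        lesion_counts["exudates"],
        lesion_counts["microaneurysms"],
        lesion_counts["hemorrhages"],
    ]

def stump_predict(features, hem_index, threshold=0.0):
    """Toy stand-in for the trained classifier: flag DR (1) when the
    hemorrhage count exceeds a threshold, else healthy (0)."""
    return 1 if features[hem_index] > threshold else 0
```

The strong single-feature result reported for hemorrhages suggests why even such a stump is not absurd as a mental model, though the actual framework learns a full gradient-boosted ensemble over all combined features.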
- Research Article
13
- 10.37648/ijrst.v13i01.008
- Jan 1, 2023
- International Journal of Research in Science and Technology
Melanoma is among the most dangerous skin disorders, yet precise diagnosis of skin cancer is difficult. Recent research has shown that a variety of tasks can be performed better using deep learning and machine learning techniques, and these algorithms are highly useful for skin conditions. In this article, we examine various deep learning and machine learning techniques and how they can be applied to melanoma detection. The paper lists a number of publicly downloadable datasets, provides information on common melanoma, and gives guidance on acquiring dermatological images. After introducing machine learning and deep learning concepts, we analyze common machine learning and deep learning architectures as well as popular frameworks for putting these algorithms into practice. Metrics for performance evaluation are then presented. We cover the research on machine learning and deep learning and how they can be applied to detecting melanoma skin diseases, and we also discuss potential research avenues and the difficulties in the field. The main objective of this work is to discuss modern machine learning and deep learning techniques for melanoma diagnosis.
- Discussion
4
- 10.34067/kid.0003752022
- Sep 29, 2022
- Kidney360
Seeing the Light: Improving Diabetic Retinopathy Outcomes by Bringing Screening to the Dialysis Clinic.
- Conference Article
1
- 10.5753/sbsi.2025.246512
- May 19, 2025
Context: Information Systems (IS) have grown exponentially, significantly influencing professional and personal environments. Both scenarios require a distinguished User Experience (UX), which generates positive feelings such as loyalty, learning, and satisfaction in end users. Consequently, tools, software, and applications that integrate Machine Learning (ML) techniques with UX are necessary for enhancing the quality of IS and increasing the productivity of UX specialists. Problem: There is a continued need for more experimental evidence regarding the development, employability/applicability, evaluation, and evolution of current technologies that automate manual tasks performed by experts. Specifically, such technologies aim to reduce workload, eliminate evaluation biases, and identify patterns that might go unnoticed during assessments. Method: This work aims to summarize and characterize, through a Systematic Mapping Study (SMS), the tools that employ ML techniques to assist in the UX evaluation process. To this end, we defined seven sub-questions that are addressed based on the data collected from the selected studies. Contributions and Impact: Based on the selected studies, we analyzed and characterized the assessment tools to provide a comprehensive understanding for both the academic and professional communities. This work presents the current state of tools that integrate ML techniques for UX evaluation, offering valuable insights into their effectiveness and application within the IS domain.
- Research Article
185
- 10.1016/j.measen.2022.100441
- Sep 5, 2022
- Measurement: Sensors
A comprehensive review on detection of plant disease using machine learning and deep learning approaches