Segmentation of Hard Exudates and Hemorrhages from Diabetic Retinopathy Images Using Residual U-Net with Squeeze and Excite Blocks
Diabetic Retinopathy (DR) is a common eye disease worldwide, usually found in patients with diabetes. Deep learning based algorithms have recently produced promising results for the automatic detection and segmentation of DR lesions from fundus images. In this paper, we present an approach for segmenting DR lesions using a Residual U-Net model. Here, we have incorporated Squeeze and Excite (SE) blocks into the Residual U-Net architecture to segment various DR lesions from fundus images. SE blocks are an attention technique that dynamically recalibrates the feature maps of a neural network, enabling it to prioritize useful features while suppressing less important ones and thereby boosting its discriminative ability. By integrating SE blocks into the Residual U-Net architecture, we aimed to enhance segmentation performance by allowing the model to better capture spatial and contextual information. The developed architecture was trained and tested on fundus image DR datasets. The results obtained by the developed model for hard exudate and hemorrhage segmentation were better than those of U-Net and its variants.
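The squeeze-and-excite recalibration described above can be sketched in a few lines (a minimal NumPy illustration with randomly initialized toy weights; the channel count and reduction ratio here are illustrative assumptions, not values from the paper):

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excite recalibration for a (C, H, W) feature map.

    Squeeze: global average pooling over the spatial dims gives (C,).
    Excite: two dense layers (ReLU, then sigmoid) give per-channel weights.
    Scale: each channel of the input is multiplied by its weight.
    """
    squeezed = feature_map.mean(axis=(1, 2))            # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)             # (C // r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # (C,), each in (0, 1)
    return feature_map * weights[:, None, None]

# Toy example: 8 channels, reduction ratio r = 4.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16, 16))
w1 = rng.normal(size=(2, 8)) * 0.1   # squeeze: 8 -> 2
w2 = rng.normal(size=(8, 2)) * 0.1   # excite: 2 -> 8
y = se_block(x, w1, w2)
assert y.shape == x.shape
```

In a trained network, w1 and w2 are learned, so the per-channel weights come to reflect which feature maps are informative for the segmentation task.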
- Conference Article
1
- 10.1109/iccae.2010.5451472
- Feb 1, 2010
We propose an algorithm for the detection of retinal landmarks (optic nerve head or optic disc, macula, and vasculature) based on optic cup location and anatomical structural details from diabetic retinopathy (DR) images of both left and right eyes. Our algorithm uses color fundus images obtained from a mydriatic camera and proceeds through four main steps. 1. Color image pre-processing: the image is enhanced and noise is removed. 2. Detection of the optic nerve head: the optic nerve head is located using the fact that the optic cup is the brightest region within it. Exudates (DR lesions), which appear at the same gray level as the optic nerve head, are suppressed, since we concentrate only on the optic cup and optic nerve head. The mean intensity of every 50 × 50 sub-image (50 × 50 being the approximate area of the optic cup) is computed throughout the image, and the sub-image with the highest mean value locates the optic cup. The optic nerve head is then found by growing the region of interest around the optic cup. Since we detect the optic cup first, which is embedded in the optic nerve head, there are no false detections of the optic nerve head even when exudates (DR lesions) match its size and shape. 3. Detection of the macula: it lies at a distance of approximately twice the diameter of the optic nerve head, just below the optic nerve head's horizontal axis. 4. Detection of the vasculature: a logical AND operation is applied to two images, one a thresholded image and the other an edge-detected image. Thresholding is performed on an adaptive-histogram-equalized image, and edge detection uses the Canny edge detector. The proposed algorithm has been tested on both normal and DR images. The detected optic disc area is validated against expert ophthalmologists' hand-drawn ground truths.
The quantitative performance is evaluated by calculating sensitivity, specificity, and predictive value. The overall sensitivity (Se), specificity (Sp), and predictive value (PV) obtained in detecting the optic nerve head are 97.2%, 99.72%, and 88.75% for normal images and 93.93%, 99.72%, and 84.18% for abnormal images, respectively.
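The brightest-sub-image heuristic of step 2 can be sketched as a sliding-window scan (a toy NumPy illustration; the synthetic image and the scan step are illustrative assumptions):

```python
import numpy as np

def locate_optic_cup(gray, win=50, step=10):
    """Scan win x win sub-images of a grayscale image and return the
    (row, col) corner of the one with the highest mean intensity;
    this is the paper's brightest-region heuristic for the optic cup."""
    best, best_rc = -np.inf, (0, 0)
    h, w = gray.shape
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            m = gray[r:r + win, c:c + win].mean()
            if m > best:
                best, best_rc = m, (r, c)
    return best_rc

# Toy fundus stand-in: dark background with one bright 50 x 50 "cup".
img = np.zeros((300, 400))
img[120:170, 200:250] = 1.0
assert locate_optic_cup(img) == (120, 200)
```

A cumulative-sum (integral image) table would make each window mean O(1) in a real implementation; the brute-force scan above just keeps the idea visible.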
- Conference Article
- 10.1109/scse53661.2021.9568361
- Sep 16, 2021
Detection and classification of medical images has become a trending field of study over the last few decades, with a considerable number of vital challenges still to be overcome. Ample work has been carried out to provide proper solutions for those key challenges. This study extends one such medical image classification process to classify the stages of Diabetic Retinopathy (DR) from colour fundus images. The study proposes a novel Convolutional Neural Network (CNN) architecture, one of the most popular and efficient approaches to classifying DR stages. Initially, preprocessing was applied to the DR fundus images: green channel extraction and Contrast Limited Adaptive Histogram Equalization (CLAHE). A data augmentation strategy was utilised to increase the number of training images. Finally, feature extraction and classification were carried out using the proposed CNN architecture, a 14-layer CNN model that performs three main classifications. Images are first classified in a tree-structured binary fashion into No_DR and DR; the DR images are then classified into two classes, Pre_Intermediate and Post_Intermediate; and those two classes are in turn classified into Mild and Moderate, and Proliferate_DR and Severe, respectively. Kaggle, one of the benchmark dataset repositories, was the data source for this study. The proposed model achieved accuracies of 81%, 96%, 84%, and 97% for the above-mentioned classifications, respectively.
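The preprocessing stage (green channel extraction followed by equalization) can be sketched as below; note that plain global histogram equalization stands in here for CLAHE, which additionally tiles the image and clips the histogram (cv2.createCLAHE or skimage's equalize_adapthist would do the real thing):

```python
import numpy as np

def preprocess_fundus(rgb):
    """Extract the green channel of an RGB fundus image (where vessels and
    lesions contrast best) and apply global histogram equalization as a
    simplified stand-in for CLAHE."""
    green = rgb[..., 1].astype(np.uint8)
    hist = np.bincount(green.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map intensities so the output histogram is approximately uniform.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[green]

# Toy low-contrast RGB image: values confined to [0, 120).
rng = np.random.default_rng(1)
img = rng.integers(0, 120, size=(64, 64, 3), dtype=np.uint8)
eq = preprocess_fundus(img)
assert eq.shape == (64, 64)
```

After equalization the intensities are stretched across the full 0-255 range, which is what makes faint lesions easier for the downstream CNN to pick up.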
- Research Article
5
- 10.1049/ipr2.12865
- Jul 3, 2023
- IET Image Processing
Accurate segmentation of hard exudates in early non-proliferative diabetic retinopathy can assist physicians in administering appropriate, targeted treatment, avoiding the more serious damage to vision caused by deterioration of the disease in later stages. Here, an Adaptive Learning Unet-based adversarial network with Convolutional neural network and Transformer (CT-ALUnet) is proposed for automatic segmentation of hard exudates, combining the excellent local modelling ability of Unet with the global attention mechanism of the transformer. Firstly, multi-scale features are extracted through a CNN dual-branch encoder. Then, information from features at adjacent scales is fused, and the fused features are selected adaptively by attention-guided multi-scale fusion blocks (AGMFB) to maintain overall feature consistency. After that, the high-level encoded features are fed to transformer blocks to extract global context. Finally, these features are fused layer by layer to achieve accurate segmentation of hard exudates. In addition, adversarial training is incorporated into the segmentation model, improving Dice and MIoU scores by 7.5% and 3%, respectively. Experiments demonstrate that CT-ALUnet shows more reliable segmentation and stronger generalization ability than other SOTA methods, laying a good foundation for computer-assisted diagnosis and assessment of efficacy.
- Conference Article
7
- 10.1117/12.769858
- Mar 6, 2008
- Proceedings of SPIE, the International Society for Optical Engineering/Proceedings of SPIE
We propose a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically relevant DR images from a database. Given a query image, our approach first classifies it into one of three categories: microaneurysm (MA), neovascularization (NV), and normal. It then retrieves DR images that are clinically relevant to the query image from an archival image database. In the classification stage, query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes, which maps every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are its top K nearest neighbors in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.
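The retrieval stage reduces to a same-label K-nearest-neighbor search in the bag feature space, which can be sketched as follows (the function name and the toy 2-D feature vectors are illustrative, not from the paper):

```python
import numpy as np

def retrieve(query_vec, query_label, db_vecs, db_labels, k=3):
    """Return indices of the K database images that share the query's
    predicted class, ranked by Euclidean distance in the bag feature space."""
    same = np.flatnonzero(db_labels == query_label)            # same-label subset
    dists = np.linalg.norm(db_vecs[same] - query_vec, axis=1)  # distances to query
    return same[np.argsort(dists)[:k]]                         # K nearest, in order

# Toy database: five images with 2-D bag features and labels {0, 1}.
db = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [0.9, 0.1], [4.0, 4.0]])
labels = np.array([0, 0, 1, 0, 1])
idx = retrieve(np.array([1.0, 0.0]), 0, db, labels, k=2)
assert list(idx) == [1, 3]
```

In the actual system the feature vectors would be the outputs of the learned nonlinear mapping, so proximity in this space encodes clinical relevance rather than raw pixel similarity.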
- Research Article
24
- 10.1109/tbme.2021.3115552
- Apr 1, 2022
- IEEE Transactions on Biomedical Engineering
Hyper-reflective foci (HRF) are spot- or block-shaped areas of high local contrast and high reflectivity, mostly observed in retinal optical coherence tomography (OCT) images of patients with fundus diseases. Clinically, HRF mainly appear as hard exudates (HE) and microglia (MG). Accurate segmentation of HE and MG is essential to alleviating the harm caused by retinal diseases. However, segmenting HE and MG simultaneously remains a challenge due to their similar pathological features, varied shapes and location distributions, blurred boundaries, and small size. To tackle these problems, in this paper we propose a novel global information fusion and dual decoder collaboration-based network (GD-Net), which can segment HE and MG in OCT images jointly. Specifically, to suppress the interference of similar pathological features, a novel global information fusion (GIF) module is proposed, which aggregates global semantic information efficiently. To further improve segmentation performance, we design a dual decoder collaborative workspace (DDCW) that comprehensively exploits the semantic correlation between HE and MG while enhancing their mutual influence through alternating feedback. To further optimize GD-Net, we explore a joint loss function that integrates pixel-level and image-level terms. The dataset for this study comes from patients diagnosed with diabetic macular edema at the Department of Ophthalmology, University Medical Center Groningen, The Netherlands. Experimental results show that our proposed method outperforms other state-of-the-art methods, which suggests its effectiveness and provides research ideas for medical applications.
- Research Article
88
- 10.1016/j.cmpb.2011.06.007
- Jul 14, 2011
- Computer Methods and Programs in Biomedicine
Simple methods for segmentation and measurement of diabetic retinopathy lesions in retinal fundus images
- Research Article
51
- 10.1016/j.neucom.2018.10.103
- Apr 26, 2019
- Neurocomputing
Bin loss for hard exudates segmentation in fundus images
- Conference Article
4
- 10.1145/2382936.2383030
- Oct 7, 2012
Diabetic retinopathy (DR) is a vision-threatening complication that affects people suffering from diabetes. Diagnosis of DR during its early stages can significantly reduce the risk of severe vision loss. The process of DR severity grading is prone to human error and depends on the expertise of the ophthalmologist. As a result, many researchers have started exploring automated detection and evaluation of diabetic retinal lesions. Unfortunately, to date there is no automated system that can perform DR lesion detection with accuracy comparable to a human expert. In this poster, we present a novel way of employing content-based image retrieval to provide a clinician with instant reference to archival, standardized DR images for assisting with the diagnosis of a given DR image. The focus of the poster is on retrieving DR images with two significant DR clinical findings, namely microaneurysms (MA) and neovascularization (NV). We propose a multi-class multiple-instance DR image retrieval framework that makes use of a modified color correlogram (CC) and statistics of steerable Gaussian filter (SGF) responses. Experiments using real DR images, with comparisons to other prior-art methods, demonstrate the improved performance of the proposed approach.
- Conference Article
46
- 10.1109/iembs.2007.4353456
- Aug 1, 2007
Diabetic Retinopathy (DR) is a common cause of visual impairment among people of working age in industrialized countries. Automatic recognition of DR lesions, like hard exudates (HEs), in fundus images can contribute to the diagnosis and screening of this disease. In this study, we extracted a set of features from image regions and selected the subset which best discriminates between HEs and the retinal background. The selected features were then used as inputs to a multilayer perceptron (MLP) classifier to obtain a final segmentation of HEs in the image. Our database was composed of 100 images with variable color, brightness, and quality. 50 of them were used to train the MLP classifier and the remaining 50 to assess the performance of the method. Using a lesion-based criterion, we achieved a mean sensitivity of 84.4% and a mean positive predictive value of 62.7%. With an image-based criterion, our approach reached a 100% mean sensitivity, 84.0% mean specificity and 92.0% mean accuracy.
- Research Article
13
- 10.32604/csse.2023.028703
- Jan 1, 2023
- Computer Systems Science and Engineering
A prevalent diabetic complication is Diabetic Retinopathy (DR), which can damage the retina's blood vessels and lead to a severe loss of vision. If treated at an early stage, vision loss can be prevented. But since diagnosis takes time and there is a shortage of ophthalmologists, patients may suffer vision loss even before diagnosis; early detection of DR is therefore a necessity. The primary purpose of this work is to apply a data fusion/feature fusion technique, which combines more than one relevant feature, to predict diabetic retinopathy at an early stage with greater accuracy. Automated procedures for diabetic retinopathy analysis are fundamental to addressing these issues: while deep learning has achieved high validation accuracies for binary classification, multi-class results are less impressive, particularly for early-stage disease. Densely Connected Convolutional Networks are suggested for detecting Diabetic Retinopathy in retinal images. The presented model is trained on a Diabetic Retinopathy dataset of 3,662 images provided by APTOS. Experimental results show that the best of the three models in the proposed work achieved a training accuracy of 93.51%, with 0.98 precision, 0.98 recall, and a 0.98 F1-score. The same model was tested on 550 images of the Kaggle 2015 dataset, where it detected No DR images with 96% accuracy, Mild DR with 90%, Moderate DR with 89%, Severe DR with 87%, and Proliferative DR with 93%.
- Research Article
2
- 10.1049/2023/8820773
- Jan 1, 2023
- IET Circuits, Devices & Systems
Diabetic retinopathy (DR) is an ocular ailment that may lead to loss of vision and eventual blindness in individuals diagnosed with diabetes. It adversely affects the blood vessels of the retina, a layer of light-sensitive tissue at the back of the eye. DR is identified from retinal fundus images, but detecting abnormalities in raw fundus images is a significant challenge for medical practitioners, so the fundus images must be processed. This paper delineates several image processing techniques for DR images, including brightness manipulation, negative transformation, and threshold operations, and focuses on enhancement techniques that optimize the visual quality of DR images to make disease detection easier. Edge detection in DR images is performed with the Sobel edge detection algorithm. To execute these algorithms efficiently, fast real-time systems are favoured: meeting the requirements of advanced image processing algorithms with software techniques alone is a significant challenge, owing to the many processes involved and the demand for high processing speeds. The proposed model articulates an efficient design and implementation of field-programmable gate array (FPGA)-based image enhancement, together with the Sobel edge detection algorithm, for DR images. Finally, an IP core is developed that combines multiple image enhancement operations into a single framework with low complexity.
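The Sobel edge detection at the heart of the FPGA design is a pair of fixed 3x3 convolutions; a NumPy sketch is below (the explicit per-pixel loop mirrors the streaming 3x3 window an FPGA pipeline would implement in hardware):

```python
import numpy as np

def sobel_edges(gray):
    """Sobel gradient magnitude via explicit 3x3 convolutions.
    kx responds to horizontal intensity changes, ky to vertical ones."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    out = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = gray[r - 1:r + 2, c - 1:c + 2]
            gx = (win * kx).sum()
            gy = (win * ky).sum()
            out[r, c] = np.hypot(gx, gy)   # gradient magnitude
    return out

# Toy example: a vertical step edge produces a strong response at the step.
img = np.zeros((5, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
assert edges[2, 4] > 0 and edges[2, 1] == 0
```

Because the kernels are small, fixed, and integer-valued, each output pixel needs only shifts and additions, which is why the algorithm maps so naturally onto FPGA logic.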
- Research Article
57
- 10.1016/j.cmpb.2018.02.011
- Feb 20, 2018
- Computer Methods and Programs in Biomedicine
Hard exudates segmentation based on learned initial seeds and iterative graph cut
- Research Article
7
- 10.1142/s0219519413500140
- Jan 10, 2013
- Journal of Mechanics in Medicine and Biology
In this work, we developed an approach based on mathematical morphology and the k-means clustering algorithm to detect hard exudates (HEs) in retinography images from different diabetic patients. The presence of exudates within the macular region is a hallmark of diabetic macular edema and can be detected with high sensitivity. In ophthalmologic images, segmentation of HEs is essential to characterize the shape of the lesion for analysis. Several approaches have been employed for exudate extraction in this domain; some authors have used mathematical morphology alone, but this approach does not detect exudates very well. In this paper, we combine the k-means clustering algorithm with mathematical morphology. The approach was tested on a set of 50 ophthalmologic images, and the results were compared with manual segmentation by an ophthalmologist.
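The clustering half of the approach can be sketched as one-dimensional k-means on pixel intensities, with the brightest cluster taken as the exudate candidate (the toy values and deterministic initialization are illustrative assumptions; the paper pairs the clustering with morphological post-processing):

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Plain k-means on scalar pixel intensities. Returns per-pixel cluster
    labels and the cluster centers; the brightest center marks the
    candidate hard-exudate class."""
    centers = np.linspace(values.min(), values.max(), k)  # deterministic init
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Toy intensities: background ~0.1, vessels ~0.4, bright exudates ~0.9.
vals = np.array([0.1, 0.12, 0.09, 0.4, 0.42, 0.9, 0.92, 0.88])
labels, centers = kmeans_1d(vals)
exudate_cluster = int(np.argmax(centers))
assert set(np.flatnonzero(labels == exudate_cluster)) == {5, 6, 7}
```

Morphological opening and closing would then remove isolated bright pixels and fill small holes in the selected cluster's mask.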
- Research Article
3
- 10.17485/ijst/2016/v9i15/88171
- May 6, 2016
- Indian Journal of Science and Technology
Diabetic Retinopathy (DR) is the consequence of micro-vascular retinal changes triggered by diabetes, which can cause vision loss if not treated in a timely manner. A major sign of Diabetic Retinopathy is the presence of exudates. This paper demonstrates a complete framework for the detection of hard exudates in retinopathy images: a Laplacian kernel is presented and incorporated into the kernel spatial FCM clustering algorithm for the segmentation of retinal fundus images. In general, the FCM and KFCM algorithms are very sensitive to noise and other imaging artefacts because they do not use spatial information. To overcome this problem, we present a Laplacian kernel spatial FCM that incorporates spatial information into its objective function and fuzzy membership function. The performance of the proposed algorithm is evaluated on different Diabetic Retinopathy images and assessed using statistical measures such as sensitivity and specificity.
- Research Article
51
- 10.1109/access.2020.3023273
- Jan 1, 2020
- IEEE Access
Diabetic retinopathy (DR) is an eye abnormality caused by chronic diabetes that affects patients worldwide. Hard exudates are an important and observable sign of DR and can be used for early diagnosis. In this paper, an automatic hard exudate segmentation method is proposed to aid ophthalmologists in diagnosing DR at an early stage. We utilized the SLIC superpixel algorithm to generate sample patches, thus overcoming the difficulty of a limited and imbalanced dataset. Furthermore, a U-net based network architecture with inception modules and residual connections is proposed to conduct end-to-end hard exudate segmentation, with focal loss as the loss function. Extensive experiments were conducted on the IDRiD dataset to evaluate the performance of the proposed method. The reported sensitivity, specificity, and accuracy reach 96.38%, 97.14%, and 97.95% respectively, which demonstrates the effectiveness and superiority of our method. The achieved segmentation results prove the potential of the method for clinical diagnosis.
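The focal loss used as the training objective can be sketched for the binary pixel-labeling case as follows (the gamma and alpha values follow the common defaults from the focal loss literature, an assumption, not values reported in this paper):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: the (1 - pt)^gamma factor down-weights easy,
    well-classified pixels so the rare exudate pixels dominate training.
    p holds predicted foreground probabilities, y the 0/1 ground truth."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)             # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)      # class-balance weight
    return float(np.mean(-w * (1 - pt) ** gamma * np.log(pt)))

# A confident correct prediction contributes far less than a clear error,
# which is exactly what counters the background/lesion pixel imbalance.
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.10]), np.array([1]))
assert hard > 100 * easy
```

Setting gamma to 0 recovers ordinary class-weighted cross-entropy, so the modulating factor is the only difference from the standard segmentation loss.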