Abstract

Automatic diabetic retinopathy diagnostic methods are proposed to facilitate the examination process and to assist the physician. Most traditional convolutional neural network (CNN) algorithms use only spatial features for image category recognition. This approach may be suboptimal for diabetic retinopathy screening because retinal images generally yield similar feature maps with only minor differences in the spatial domain. We propose a new high-level image understanding method using a modified CNN architecture combined with a modified support vector domain description (SVDD) as the classifier. This innovative architecture uses two pathways to extract features of the retinal images in both the spatial and spectral domains. The standard pre-trained AlexNet is chosen for modification to avoid the time complexity of training from scratch. Despite the advantages of the modified AlexNet with the two-pathway configuration and standard SVDD classification, the choice of SVDD kernel function affects the performance of the proposed algorithm. By transforming the data appropriately into two- or three-dimensional feature spaces, the proposed SVDD can obtain more flexible and more accurate image descriptions. We also compared the performance of our approach with that of commonly used classification methods such as K-means, subtractive, and FCM clustering. Our proposed architecture achieves more than 98% precision and sensitivity for two-class classification.
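To illustrate the SVDD idea the abstract relies on, the sketch below implements a simplified spherical data description in NumPy: CNN feature vectors of the target class are mapped into a Gaussian (RBF) kernel space, their centroid defines a hypersphere, and a test image is accepted if its kernel-space distance to that centroid falls within the sphere's radius. This is a hedged, minimal stand-in (equal weights replace the support-vector coefficients of a full SVDD solver); the feature dimensions, `gamma` value, and class labels are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of a and the rows of b.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

class SimpleSVDD:
    """Simplified spherical data description: distance to the kernel-space
    centroid of the training (target) class. A full SVDD would instead solve
    a quadratic program for weighted support vectors."""

    def fit(self, X):
        self.X = X
        K = rbf_kernel(X, X)
        self.const = K.mean()  # (1/n^2) * sum_ij k(x_i, x_j), a fixed term
        # Radius: the largest training-point distance defines the sphere.
        self.r2 = self.decision(X).max()
        return self

    def decision(self, Z):
        # Squared kernel-space distance of each row of Z to the centroid:
        # k(z, z) - (2/n) * sum_i k(z, x_i) + (1/n^2) * sum_ij k(x_i, x_j)
        Kzz = np.ones(len(Z))  # k(z, z) = 1 for the RBF kernel
        Kzx = rbf_kernel(Z, self.X)
        return Kzz - 2 * Kzx.mean(axis=1) + self.const

    def predict(self, Z):
        # Inside the sphere -> target class (e.g. "no retinopathy").
        return self.decision(Z) <= self.r2
```

A usage sketch with synthetic two-dimensional "features": a point near the target cluster is accepted, while a distant outlier falls outside the description.

```python
rng = np.random.default_rng(0)
features = rng.normal(0.0, 0.3, size=(50, 2))  # stand-in target-class features
model = SimpleSVDD().fit(features)
model.predict(np.array([[0.0, 0.0], [5.0, 5.0]]))  # inlier, then outlier
```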
