Abstract

Automated detection of eye diseases by applying artificial intelligence techniques to optical coherence tomography (OCT) images is widely researched in ophthalmology. Using high-level simulations, this study investigates and evaluates three automated age-related macular degeneration (AMD) detection flows in terms of computation time and detection accuracy, with a view toward future hardware-accelerated designs of intelligent and portable OCT systems. A block-matching and 3D filter (BM3DF), a hybrid median filter (HMF), and an adaptive Wiener filter (AWF) are used to denoise the OCT images. A support vector machine (SVM), AlexNet, GoogLeNet, and Inception-ResNet are employed for AMD detection, while local binary patterns (LBP), linear configuration patterns, and transfer learning techniques are used to extract image features. Simulation results reveal that machine-learning-based automated AMD detection achieves a high detection accuracy of 95.91% with low computation time when using the HMF rather than the BM3DF. For deep-learning-based automated AMD detection, the combination of HMF and Inception-ResNet achieves the highest detection accuracy of 98.64%, but at the cost of a dramatic increase in computation time; among the deep networks, only AlexNet combines a competitive detection accuracy of 96.40% with low computation time. By comparing the denoising methods across the distinct automated AMD detection flows, this study reveals the tradeoffs between computation time and detection accuracy.
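The abstract does not specify how the hybrid median filter (HMF) is implemented; a minimal NumPy sketch of the common 3×3 variant is shown below, assuming the usual construction in which the output is the median of the plus-shaped neighborhood median, the X-shaped (diagonal) neighborhood median, and the center pixel. This variant suppresses speckle-like impulses while preserving edges better than a plain median filter, which is consistent with its low computation time reported in the study.

```python
import numpy as np

def hybrid_median_filter(img, pad_mode="edge"):
    """3x3 hybrid median filter (illustrative sketch).

    Output per pixel = median of:
      - median over the plus-shaped neighborhood (center + N, S, W, E)
      - median over the X-shaped neighborhood (center + 4 diagonals)
      - the center pixel itself
    """
    p = np.pad(np.asarray(img, dtype=float), 1, mode=pad_mode)
    c = p[1:-1, 1:-1]  # center pixels

    # Plus-shaped neighborhood: center plus the 4 edge-adjacent pixels.
    plus = np.stack([c, p[:-2, 1:-1], p[2:, 1:-1],
                     p[1:-1, :-2], p[1:-1, 2:]])
    # X-shaped neighborhood: center plus the 4 diagonal pixels.
    cross = np.stack([c, p[:-2, :-2], p[:-2, 2:],
                      p[2:, :-2], p[2:, 2:]])

    m_plus = np.median(plus, axis=0)
    m_cross = np.median(cross, axis=0)
    return np.median(np.stack([m_plus, m_cross, c]), axis=0)
```

For example, a single bright impulse in an otherwise flat region is removed: with a 5×5 image of value 10 and one pixel set to 255, the filtered value at that pixel is 10.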
