Abstract

Retinal image quality assessment (RIQA) is essential to ensure that images used for medical analysis are of sufficient quality for reliable diagnosis. A modified VGG16 network with transfer learning is introduced to classify retinal images as good or bad quality. Both spatial images and wavelet detail subbands are compared as inputs to the modified VGG16 network. Three public retinal image datasets captured with different imaging devices are used, both individually and collectively. The modified VGG16 network attained superior performance, achieving accuracies in the range of 99–100% regardless of whether retinal images from the same or different sources were considered and whether spatial or wavelet images were used. The implemented RIQA algorithm was also found to outperform other RIQA deep learning algorithms from the literature by 1.5–10% and to achieve accuracies up to 32% higher than traditional RIQA methods on the same dataset.
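
The abstract does not specify the exact VGG16 modifications, freezing strategy, or wavelet settings, so the following is only a minimal sketch of the general approach it describes: an ImageNet-pretrained VGG16 backbone with a small binary classification head for good/bad quality, plus an optional wavelet detail-subband input built with PyWavelets. The head layers, the Haar wavelet, and the three-channel stacking of detail subbands are illustrative assumptions, not the paper's specification.

```python
# Sketch of VGG16 transfer learning for binary retinal image quality
# classification, with an optional wavelet detail-subband input.
# All architectural details below are assumptions for illustration.
import numpy as np
import pywt
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16


def wavelet_detail_image(gray_img: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Stack the level-1 detail subbands (LH, HL, HH) as a 3-channel image.

    Assumes a single-channel (grayscale) retinal image; the wavelet choice
    and the 3-channel stacking are assumptions, not the paper's setup.
    """
    _, (lh, hl, hh) = pywt.dwt2(gray_img, wavelet)
    return np.stack([lh, hl, hh], axis=-1)


def build_riqa_model(input_shape=(224, 224, 3)) -> tf.keras.Model:
    """VGG16 backbone with ImageNet weights (frozen) and a small binary head."""
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # transfer learning: reuse pretrained convolutional features

    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)   # illustrative head size
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # good vs. bad quality

    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model


if __name__ == "__main__":
    model = build_riqa_model()
    model.summary()
```

In this sketch the same network accepts either the spatial RGB image or the stacked wavelet detail subbands (resized to the expected input shape), which mirrors the abstract's comparison of spatial and wavelet inputs under one architecture.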
