Abstract
Deep learning algorithms as tools for automated image classification have recently experienced rapid growth in imaging-dependent medical specialties, including ophthalmology. However, only a few algorithms tailored to specific health conditions have achieved regulatory approval for autonomous diagnosis. There is now an international effort to establish optimized thresholds for algorithm performance benchmarking in a rapidly evolving artificial intelligence field. This review examines the largest deep learning studies in glaucoma, with a special focus on identifying recurrent challenges and limitations within these studies that preclude widespread clinical deployment. We focus on the 3 most common input modalities used in diagnosing glaucoma, namely, fundus photographs, spectral domain optical coherence tomography scans, and standard automated perimetry data. We then analyze 3 major challenges present in all studies: defining the algorithm output of glaucoma, determining reliable ground truth datasets, and compiling representative training datasets.