Abstract
Age-related macular degeneration (AMD) is a leading cause of vision loss in the elderly, highlighting the need for early and accurate detection. In this study, we propose DeepDrAMD, a hierarchical vision transformer-based deep learning model that integrates data augmentation techniques with the Swin Transformer to detect AMD and distinguish between its subtypes using color fundus photographs (CFPs). DeepDrAMD was trained on the in-house WMUEH training set and achieved high performance in AMD detection, with an AUC of 98.76% on the WMUEH test set and 96.47% on the independent external Ichallenge-AMD cohort. Furthermore, DeepDrAMD effectively classified dryAMD and wetAMD, achieving AUCs of 93.46% in the WMUEH cohort and 91.55% in the independent external ODIR cohort. Notably, DeepDrAMD excelled at distinguishing between wetAMD subtypes, achieving an AUC of 99.36% in the WMUEH cohort. Comparative analysis showed that DeepDrAMD outperformed conventional deep-learning models and expert-level diagnosis. A cost-benefit analysis demonstrated that DeepDrAMD offers substantial cost savings and efficiency gains over manual reading. Overall, DeepDrAMD represents a significant advance in AMD detection and differential diagnosis using CFPs and has the potential to assist healthcare professionals in informed decision-making, early intervention, and treatment optimization.
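To illustrate the general approach described above (a Swin Transformer backbone with data augmentation, fine-tuned to classify color fundus photographs), the following is a minimal sketch in PyTorch. It is not the authors' implementation: the class scheme, image size, augmentation choices, and hyperparameters are assumptions made for illustration only.

```python
# Minimal sketch, NOT the DeepDrAMD implementation: a Swin Transformer
# fine-tuned as a fundus-photograph classifier with basic augmentation.
# Class labels, augmentations, and hyperparameters are assumed values.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 3  # assumed label scheme, e.g. normal / dryAMD / wetAMD

# Augmentations typical for color fundus photographs (assumed choices).
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hierarchical vision transformer backbone (Swin-T) with a new classification head.
model = models.swin_t(weights=models.Swin_T_Weights.IMAGENET1K_V1)
model.head = nn.Linear(model.head.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One training step on a dummy batch, to show the end-to-end wiring.
images = torch.randn(4, 3, 224, 224)            # stand-in for transformed CFPs
labels = torch.randint(0, NUM_CLASSES, (4,))    # stand-in labels
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

In practice, the same skeleton would be wrapped in a data loader over labeled CFPs and an evaluation loop reporting AUC on held-out cohorts; those details are omitted here.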