Abstract

Purpose: Fundus images are typically used as the sole training input for automated diabetic retinopathy (DR) classification. In this study, we considered several well-known DR risk factors and attempted to improve the accuracy of DR screening.

Methods: Fusing nonimage data (e.g., age, gender, smoking status, International Classification of Disease code, and laboratory tests) with data from fundus images can enable an end-to-end deep learning architecture for DR screening. We propose a neural network that simultaneously trains on heterogeneous data and improves DR classification performance in terms of sensitivity and specificity. In the current retrospective study, 13,410 fundus images and their corresponding nonimage data were collected from Chung Shan Medical University Hospital in Taiwan. The images were classified as either nonreferable or referable for DR by a panel of ophthalmologists. Cross-validation was used to train the models and to evaluate classification performance.

Results: The proposed fusion model achieved a 97.96% area under the curve with 96.84% sensitivity and 89.44% specificity for determining referable DR from multimodal data, and significantly outperformed models that used image or nonimage information alone.

Conclusions: The fusion model with heterogeneous data has the potential to improve referable DR screening performance for earlier referral decisions.

Translational Relevance: Artificial intelligence fused with heterogeneous data from electronic health records could provide earlier referral decisions from DR screening.
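The Methods describe fusing an image representation with nonimage (tabular) fields in a single end-to-end network. The abstract does not give the exact architecture, so the following NumPy sketch is illustrative only: all dimensions, weight initializations, and the late-fusion (concatenation) design are assumptions, with random weights standing in for trained parameters and for the output of a CNN image encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes; the paper's actual dimensions are not stated in the abstract.
IMG_FEAT = 128   # features from an image encoder (stand-in for a CNN)
TAB_IN = 8       # nonimage inputs: age, gender, smoking status, ICD codes, labs
TAB_FEAT = 16    # tabular embedding size (assumed)

# Randomly initialized weights stand in for trained parameters.
W_tab = rng.normal(size=(TAB_IN, TAB_FEAT)) * 0.1
b_tab = np.zeros(TAB_FEAT)
W_out = rng.normal(size=(IMG_FEAT + TAB_FEAT, 1)) * 0.1
b_out = np.zeros(1)

def fusion_forward(img_features, tabular):
    """Late fusion: embed the tabular fields, concatenate with the image
    features, and predict P(referable DR) with a logistic output."""
    tab_emb = relu(tabular @ W_tab + b_tab)
    fused = np.concatenate([img_features, tab_emb], axis=1)
    return sigmoid(fused @ W_out + b_out)

# One batch of 4 patients, with random stand-ins for real inputs.
img = rng.normal(size=(4, IMG_FEAT))   # stand-in for CNN image embeddings
tab = rng.normal(size=(4, TAB_IN))     # stand-in for EHR fields
probs = fusion_forward(img, tab)
print(probs.shape)  # one referable-DR probability per patient
```

Because both branches feed a shared output layer, gradients from the classification loss would flow into the image and tabular encoders jointly, which is one common way to realize the "simultaneously trains heterogeneous data" idea.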
