Abstract

Objective: We aimed to develop a deep learning system capable of identifying subjects with cognitive impairment quickly and easily based on multimodal ocular images.

Design: Cross-sectional study.

Subjects: Participants of the Beijing Eye Study 2011 and patients attending Beijing Tongren Eye Center and Beijing Tongren Hospital Physical Examination Center.

Methods: We trained and validated a deep learning algorithm to assess cognitive impairment using retrospectively collected data from the Beijing Eye Study 2011. Cognitive impairment was defined as a Mini–Mental State Examination (MMSE) score <24. Based on fundus photographs and optical coherence tomography (OCT) images, we developed five models using the following sets of images: macula-centered fundus photographs, optic disc-centered fundus photographs, fundus photographs of both fields, OCT images, and fundus photographs of both fields combined with OCT images (multi-modal). The performance of the models was evaluated and compared in an external validation dataset collected from patients attending Beijing Tongren Eye Center and Beijing Tongren Hospital Physical Examination Center.

Main Outcome Measures: Area under the curve (AUC).

Results: A total of 9,424 retinal photographs and 4,712 OCT images were used to develop the model. The external validation sets from the two centers included 1,180 fundus photographs and 590 OCT images. Model comparison revealed that the multi-modal model performed best, achieving an AUC of 0.820 in the internal validation set, 0.786 in external validation set 1, and 0.784 in external validation set 2. We evaluated the performance of the multi-modal model across sexes and age groups and found no significant differences. Heatmap analysis showed that the multi-modal model relied on signals around the optic disc in fundus photographs, and on the retina and choroid around the macular and optic disc regions in OCT images, to identify participants with cognitive impairment.
Conclusions: Fundus photographs and OCT can provide valuable information on cognitive function. Multi-modal models provide richer information than single-modal models. Deep learning algorithms based on multimodal retinal images may be capable of screening for cognitive impairment. This technique has potential value for broader implementation in community-based screening or clinic settings.
