Abstract

We propose a hybrid sequential prediction model called “Deep Sequence” that integrates radiomics-engineered imaging features with demographic and visual acuity factors in a recurrent neural network (RNN) on the same platform to predict the risk of exudation within a future time frame in non-exudative AMD eyes. The proposed model provides scores associated with the risk of exudation in the short term (within 3 months) and the long term (within 21 months), handling challenges related to the variability of OCT scan characteristics and the size of the training cohort. We used a retrospective clinical trial dataset comprising 671 AMD fellow eyes with 13,954 observations before any signs of exudation for training and validation in a tenfold cross-validation setting. Deep Sequence achieved high performance for the prediction of exudation within 3 months (0.96 ± 0.02 AUCROC) and within 21 months (0.97 ± 0.02 AUCROC) on cross-validation. Training the proposed model on this clinical trial dataset and testing it on an external real-world clinical dataset showed high performance for prediction within 3 months (0.82 AUCROC) but a clear decrease in performance for prediction within 21 months (0.68 AUCROC). While the performance differences at longer time intervals may derive from differences between the datasets, we believe that the high performance and generalizability achieved in short-term prediction may have a high clinical impact, allowing for optimal patient follow-up and adding the possibility of more frequent, detailed screening and tailored treatments for patients at imminent risk of exudation.
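
The abstract does not specify the implementation, so the following is only a minimal sketch of the described architecture, assuming PyTorch, a GRU cell, a 100-dimensional per-visit feature vector, and two sigmoid risk heads for the 3- and 21-month horizons; all of these choices are illustrative assumptions rather than the paper's specification.

```python
# Minimal sketch of a hybrid sequential risk model (assumed PyTorch/GRU setup,
# not the authors' exact architecture).
import torch
import torch.nn as nn

class DeepSequenceSketch(nn.Module):
    def __init__(self, n_features=100, hidden_size=64):
        super().__init__()
        # Recurrent encoder over per-visit feature vectors
        # (radiomics + demographics + visual acuity), ordered by visit date.
        self.rnn = nn.GRU(n_features, hidden_size, batch_first=True)
        # One risk head per prediction horizon (short term, long term).
        self.head_3m = nn.Linear(hidden_size, 1)
        self.head_21m = nn.Linear(hidden_size, 1)

    def forward(self, visits):
        # visits: (batch, n_visits, n_features)
        _, h = self.rnn(visits)          # h: (1, batch, hidden_size)
        h = h.squeeze(0)
        risk_3m = torch.sigmoid(self.head_3m(h))    # P(exudation within 3 months)
        risk_21m = torch.sigmoid(self.head_21m(h))  # P(exudation within 21 months)
        return risk_3m, risk_21m

# Example: risk scores for a batch of 8 eyes with 5 prior visits each.
model = DeepSequenceSketch()
scores_3m, scores_21m = model(torch.randn(8, 5, 100))
```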

Highlights

  • Age-related macular degeneration (AMD) is the leading cause of visual loss in developed countries with an aging population[1]

  • We used the retrospective de-identified HARBOR clinical trial dataset (ClinicalTrials.gov identifier: NCT00891735) for training and validation of the proposed model in a cross-validation scenario. For our external test set, we curated an independent set of optical coherence tomography (OCT) scans from the Bascom Palmer Eye Institute (BPEI, Miami, Florida) to evaluate the model trained on the HARBOR data

  • We implemented a hybrid sequential prediction model, Deep Sequence, that incorporates longitudinal OCT imaging radiomics and demographic information in a recurrent neural network (RNN) model to predict the probability of a future exudative event in eyes with early and intermediate AMD (AMD progression) within a given time interval; a feature-extraction sketch follows this list
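
The paper does not name a radiomics toolkit, so the sketch below only illustrates per-visit extraction of shape-, size-, and intensity-relationship features using the open-source pyradiomics package; the file paths for the OCT volume and its lesion segmentation mask are hypothetical placeholders.

```python
# Minimal sketch of per-visit radiomic feature extraction (assumed pyradiomics
# workflow; the paper does not specify its extraction software).
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
# Shape/size features, first-order intensity statistics, and GLCM features
# describing the relationship of image intensity between voxels.
extractor.enableFeatureClassByName("shape")
extractor.enableFeatureClassByName("firstorder")
extractor.enableFeatureClassByName("glcm")

# Placeholder paths; any SimpleITK-readable volume/mask pair works.
features = extractor.execute("oct_volume.nrrd", "lesion_mask.nrrd")
numeric_features = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(numeric_features), "radiomic features for this visit")
```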

Introduction

Age-related macular degeneration (AMD) is the leading cause of visual loss in developed countries with an aging population[1]. While multiple studies using deep learning[7,8] have shown that OCT images from normal versus AMD eyes can be classified effectively by directly analyzing the image pixel data, two main studies based on the extraction of OCT image biomarkers have been published previously regarding the prediction of AMD progression[9,10]. These two studies proposed an initial solution for the prediction of exudation events using traditional machine learning techniques, with reported performances of 0.74 and 0.68 AUC. As a general rule of thumb, the size of a dataset should be at least about 10× its dimension, which is impractical for most longitudinal clinical prediction cases given the limited availability of data. To overcome these challenges, we propose a hybrid modeling approach that integrates radiomics (handcrafted size- and shape-based features related to the relationship of image intensity between voxels), demographic and visual acuity data, and deep learning (RNN) in the same platform. The performance of our proposed Deep Sequence model is compared against two baselines, a traditional Random Forest model and a sequential Cox Proportional Hazards[16] model (Cox model), on all evaluations.
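
As a rough illustration of how such baselines are scored, the sketch below runs a tenfold cross-validated AUCROC evaluation of a Random Forest classifier on per-visit feature vectors; the synthetic data and the grouping of visits by eye ID (to keep all visits from one eye in a single fold) are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of a tenfold cross-validated AUCROC evaluation for a
# Random Forest baseline (assumed scikit-learn setup with synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 100))           # per-visit feature vectors
y = rng.integers(0, 2, size=2000)          # exudation within horizon (0/1)
eyes = rng.integers(0, 300, size=2000)     # eye IDs, so visits stay grouped

aucs = []
for train_idx, test_idx in GroupKFold(n_splits=10).split(X, y, groups=eyes):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"AUCROC: {np.mean(aucs):.2f} ± {np.std(aucs):.2f}")
```

The same loop can report the mean and standard deviation of AUCROC for any scoring model, which is the form in which the cross-validation results above are stated.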


