Abstract
Background: The retina provides valuable insights into the body's vascular health. Prior studies have demonstrated the potential of deep learning for predicting cardiovascular disease (CVD) risk from color fundus photographs.

Purpose: To predict the World Health Organization (WHO) CVD risk score from fundus images more consistently, addressing the year-to-year fluctuations associated with the traditional CVD risk score calculation.

Methods: Using 55,540 fundus images from 3,765 participants with 6-year follow-up data, we designed a deep learning model, Reti-WHO, based on the Swin Transformer to predict CVD risk regression scores. Regression and classification metrics, including the coefficient of determination (R²-score), mean absolute error (MAE), sensitivity, and specificity, were used to assess the accuracy of Reti-WHO. Differences between WHO CVD scores and Reti-WHO scores were tested for statistical significance. Retinal vessel measurements were used to interpret the model and to evaluate the association between Reti-WHO and vascular condition.

Results: The deep learning model achieved good classification and regression performance on the validation set, with an R²-score of 0.503, MAE of 1.58, sensitivity of 0.81, and specificity of 0.66. There was no statistically significant difference between WHO CVD scores and Reti-WHO scores (P = 0.842). The model's predictions correlated strongly with vascular measurements, including the mean and variance of arc and chord lengths in arteries and veins. In box plots and violin plots of patients' CVD risk over the years, the Reti-WHO scores produced by our model were more stable than the non-deep-learning WHO CVD risk calculations.

Conclusions: Our Reti-WHO scores were more stable than WHO CVD scores calculated solely from patients' physical indicators, suggesting that the features learned from retinal fundus photographs serve as robust indicators of CVD risk.
However, the model may still produce false negatives in high-risk predictions, and further research is needed to refine it. Future directions include validating the model across diverse populations and exploring multi-image and multi-modal approaches to improve prediction accuracy.
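As context for the evaluation metrics reported above, the following is a minimal sketch of how the regression metrics (R²-score, MAE) and classification metrics (sensitivity, specificity) could be computed from predicted and reference WHO CVD risk scores. The 10% threshold used here to binarize scores into high- and low-risk groups is an illustrative assumption, not a value taken from the paper.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R2-score and mean absolute error for continuous CVD risk scores."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(y_true - y_pred))
    return r2, mae

def classification_metrics(y_true, y_pred, threshold=10.0):
    """Sensitivity and specificity after binarizing scores at a
    hypothetical high-risk threshold (assumed, not from the paper)."""
    t = np.asarray(y_true, dtype=float) >= threshold  # true high-risk labels
    p = np.asarray(y_pred, dtype=float) >= threshold  # predicted high-risk labels
    tp = np.sum(t & p)
    fn = np.sum(t & ~p)
    tn = np.sum(~t & ~p)
    fp = np.sum(~t & p)
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity
```

These are standard definitions; in practice the same quantities are available from scikit-learn (`r2_score`, `mean_absolute_error`, `confusion_matrix`), but the explicit form shows exactly what each reported number measures.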