Abstract

Purpose: To train a deep learning (DL) algorithm to perform fully automated semantic segmentation of multiple autofluorescence lesion types in Stargardt disease.

Design: Cross-sectional study with retrospective imaging data.

Participants: The study included 193 images from 193 eyes of 97 patients with Stargardt disease.

Methods: Fundus autofluorescence images obtained from patient visits between 2013 and 2020 were annotated with ground-truth labels. Model training and evaluation were performed using fivefold cross-validation.

Main Outcome Measures: Dice similarity coefficients, intraclass correlation coefficients, and Bland-Altman analyses comparing algorithm-predicted and grader-labeled segmentations.

Results: The overall Dice similarity coefficient across all lesion classes was 0.78 (95% confidence interval [CI], 0.69-0.86). Dice coefficients were 0.90 (95% CI, 0.85-0.94) for areas of definitely decreased autofluorescence (DDAF), 0.55 (95% CI, 0.35-0.76) for areas of questionably decreased autofluorescence (QDAF), and 0.88 (95% CI, 0.73-1.00) for areas of abnormal background autofluorescence (ABAF). Intraclass correlation coefficients comparing the ground-truth and automated methods were 0.997 (95% CI, 0.996-0.998) for DDAF, 0.863 (95% CI, 0.823-0.895) for QDAF, and 0.974 (95% CI, 0.966-0.980) for ABAF.

Conclusions: A DL algorithm performed accurate segmentation of autofluorescence lesions in Stargardt disease, demonstrating the feasibility of fully automated segmentation as an alternative to manual or semiautomated labeling methods.
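For readers unfamiliar with the primary metric, the sketch below shows how a per-class Dice similarity coefficient is typically computed from predicted and grader-labeled segmentation masks. This is a minimal illustration, not the authors' code: the function names, the class-index mapping to DDAF/QDAF/ABAF, and the convention of scoring two empty masks as 1.0 are assumptions for demonstration only.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |P intersect T| / (|P| + |T|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    if denominator == 0:
        # Convention assumed here: two empty masks count as perfect agreement.
        return 1.0
    return 2.0 * intersection / denominator

def per_class_dice(pred_labels: np.ndarray, truth_labels: np.ndarray,
                   classes=(1, 2, 3)) -> dict:
    # Hypothetical class indices standing in for DDAF, QDAF, and ABAF.
    return {c: dice_coefficient(pred_labels == c, truth_labels == c)
            for c in classes}
```

In a study like this, such a per-class score would be computed for each image and then summarized across the cross-validation folds (e.g., as a mean with a 95% confidence interval), which is how the DDAF, QDAF, and ABAF figures above are reported.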
