Purpose: This study aims to estimate regional choroidal thickness from color fundus photographs using convolutional neural networks with different network structures and task-learning models.

Methods: 1,276 color fundus photographs and their corresponding choroidal thickness values were obtained from healthy subjects with a Topcon DRI Triton optical coherence tomography machine. Ten commonly used convolutional neural networks were first trained to identify the most accurate model, which was selected for further experiments. The selected model was then combined with single-task, multi-task, and auxiliary-task training schemes to predict the average and sub-regional choroidal thickness in both ETDRS (Early Treatment Diabetic Retinopathy Study) grids and 100-grid subregions. Mean absolute error and the coefficient of determination (R²) were used to evaluate model performance.

Results: EfficientNet-B0 outperformed the other networks, with the lowest mean absolute error (25.61 μm) and the highest R² (0.7817) for average choroidal thickness. Incorporating spherical diopter, anterior chamber depth, and lens thickness as auxiliary tasks improved prediction accuracy (p = 6.39×10⁻⁴⁴, 2.72×10⁻³⁸, and 1.15×10⁻³⁶, respectively). For ETDRS regional choroidal thickness estimation, the multi-task model achieved better results than the single-task model (lowest mean absolute error 31.10 μm vs. 33.20 μm). The multi-task model can also simultaneously predict the choroidal thickness of 100 grids, with a minimum mean absolute error of 33.86 μm.

Conclusions: EfficientNet-B0, combined with multi-task and auxiliary-task models, achieves high accuracy in estimating average and regional macular choroidal thickness directly from color fundus photographs.
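The abstract describes an EfficientNet-B0 backbone shared between a main regional-thickness regression head and auxiliary heads for spherical diopter, anterior chamber depth, and lens thickness. The sketch below is a minimal illustration of such a multi-task/auxiliary-task setup, assuming a PyTorch implementation; the head sizes, loss weighting, and input resolution are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: the paper's actual framework, head design, and
# loss weighting are not specified in the abstract.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class ChoroidalThicknessNet(nn.Module):
    def __init__(self, n_regions: int = 9, n_aux: int = 3):
        super().__init__()
        base = efficientnet_b0(weights=None)   # EfficientNet-B0 backbone
        self.backbone = base.features          # shared convolutional features
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 1280                         # EfficientNet-B0 feature width
        # Main task: regional (e.g. ETDRS-grid) choroidal thickness values
        self.thickness_head = nn.Linear(feat_dim, n_regions)
        # Auxiliary tasks: spherical diopter, anterior chamber depth, lens thickness
        self.aux_head = nn.Linear(feat_dim, n_aux)

    def forward(self, x):
        f = self.pool(self.backbone(x)).flatten(1)
        return self.thickness_head(f), self.aux_head(f)

def multitask_loss(pred_thk, pred_aux, y_thk, y_aux, aux_weight=0.3):
    # Auxiliary terms act as a regularizer on the shared backbone.
    main = nn.functional.l1_loss(pred_thk, y_thk)   # MAE on thickness targets
    aux = nn.functional.l1_loss(pred_aux, y_aux)    # MAE on auxiliary targets
    return main + aux_weight * aux

# Dummy forward/backward pass with hypothetical tensor shapes.
model = ChoroidalThicknessNet()
imgs = torch.randn(2, 3, 224, 224)                  # placeholder fundus photos
thk, aux = model(imgs)
loss = multitask_loss(thk, aux, torch.randn(2, 9), torch.randn(2, 3))
loss.backward()
```

Setting the auxiliary weight to zero recovers a single-task baseline, which mirrors the single-task vs. multi-task comparison reported in the Results.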