Accurate segmentation of retinal layers in optical coherence tomography (OCT) images is critical for assessing diseases that affect the optic nerve, but existing automated algorithms often fail when pathology causes irregular layer topology, such as extreme thinning of the ganglion cell-inner plexiform layer (GCIPL). Deep LOGISMOS, a hybrid approach that combines the strengths of deep learning and 3D graph search to overcome their individual limitations, was developed to improve the accuracy, robustness, and generalizability of retinal layer segmentation. The method was trained on 124 OCT volumes from both eyes of 31 non-arteritic anterior ischemic optic neuropathy (NAION) patients and tested on three cross-sectional datasets with available reference tracings: Test-NAION (40 volumes from both eyes of 20 NAION subjects), Test-G (29 volumes from 29 glaucoma subjects/eyes), and Test-JHU (35 volumes from 21 multiple sclerosis and 14 control subjects/eyes), as well as one longitudinal dataset without reference tracings: Test-G-L (155 volumes from 15 glaucoma patients/eyes). On the three test datasets with reference tracings (Test-NAION, Test-G, and Test-JHU), Deep LOGISMOS achieved very high GCIPL Dice similarity coefficients (%) of 89.97±3.59, 90.63±2.56, and 94.06±1.76, respectively. On the same datasets, Deep LOGISMOS outperformed the Iowa reference algorithms, improving the Dice score by 17.5, 5.4, and 7.5 points, and also surpassed the deep learning framework nnU-Net, with improvements of 4.4, 3.7, and 1.0 points. For the 15 severe glaucoma eyes with marked GCIPL thinning (Test-G-L), it demonstrated reliable regional GCIPL thickness measurement over five years. The proposed Deep LOGISMOS approach has the potential to enhance precise quantification of retinal structures, aiding diagnosis and treatment management of optic nerve diseases.
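For context, the Dice similarity coefficient reported above is the standard voxel-overlap metric for comparing a predicted segmentation with a reference tracing. The following minimal Python sketch is not part of the Deep LOGISMOS pipeline; the function name and toy masks are illustrative assumptions showing only how such percentage scores are typically computed.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks, in percent."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 100.0 * 2.0 * intersection / denom if denom else 100.0

# Hypothetical toy masks (illustrative only, not GCIPL data)
pred = np.array([[0, 1, 1], [0, 1, 0]])
ref  = np.array([[0, 1, 1], [1, 1, 0]])
print(f"Dice: {dice_coefficient(pred, ref):.2f}%")  # -> Dice: 85.71%
```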