Abstract

Purpose

Optic nerve damage is the principal feature of glaucoma and contributes to vision loss in many diseases. In animal models, nerve health has traditionally been assessed by human experts who grade damage qualitatively or manually quantify axons within sampled subregions of histologic nerve cross sections. Both approaches are time consuming and prone to variability. First-generation automated approaches have begun to emerge, but all have significant shortcomings. Here, we seek improvements through the use of deep learning for segmenting and quantifying axons from cross sections of mouse optic nerve.

Methods

Two deep-learning approaches were developed and evaluated: (1) a traditional supervised approach using a fully convolutional network trained with labeled data only, and (2) a semisupervised approach trained with both labeled and unlabeled data within a generative-adversarial-network framework.

Results

On an independent test set of images with manually marked axon centers and boundaries, both deep-learning approaches outperformed an existing baseline automated approach and performed comparably to two independent experts. The semisupervised approach performed best and was implemented in AxonDeep.

Conclusions

AxonDeep performs automated quantification and segmentation of axons from healthy-appearing nerves and from nerves with mild to moderate damage, with expert-level accuracy but without the variability and constraints of manual analysis.

Translational Relevance

Deep-learning-based axon quantification provides rapid, objective, and higher-throughput analysis of the optic nerve than would otherwise be possible.
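The abstract does not state which overlap metric was used to compare automated segmentations against the manually marked axon boundaries. As a purely illustrative sketch (the function and variable names here are assumptions, not from the paper), a common way to score agreement between an automated mask and a manual one is the Dice coefficient:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks (1.0 = identical)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 masks standing in for one axon's automated vs. manual segmentation
auto_mask = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])
manual_mask = np.array([[1, 1, 0, 0],
                        [1, 0, 0, 0],
                        [0, 0, 0, 0],
                        [0, 0, 0, 0]])
print(round(dice_coefficient(auto_mask, manual_mask), 3))  # → 0.857
```

In practice, such a per-axon score would be aggregated over all marked axons in the test set to compare the automated method against each expert's annotations.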
