Abstract

Developmental reading disability is a prevalent and often enduring problem with varied mechanisms that contribute to its phenotypic heterogeneity. This mechanistic and phenotypic variation, together with relatively modest sample sizes, may have limited the development of accurate neuroimaging-based classifiers for reading disability, particularly given the large feature space of neuroimaging datasets. An unsupervised learning model was used to reduce deformation-based data to a lower-dimensional manifold, and supervised learning models were then used to classify these latent representations in a dataset of 96 reading disability cases and 96 controls (mean age: 9.86 ± 1.56 years). A combined unsupervised autoencoder and supervised convolutional neural network approach provided an effective classification of cases and controls (accuracy: 77%; precision: 0.75; recall: 0.78). Brain regions that contributed to this classification accuracy were identified by adding noise to the voxel-level image data, which showed that reading disability classification accuracy was most influenced by the superior temporal sulcus, dorsal cingulate, and lateral occipital cortex. Regions that were most important for the accurate classification of controls included the supramarginal gyrus, orbitofrontal cortex, and medial occipital cortex. The contribution of these regions reflected individual differences in reading-related abilities, such as non-word decoding or verbal comprehension. Together, the results demonstrate an optimal deep learning solution for classification using neuroimaging data. In contrast with standard mass-univariate test results, results from the deep learning model also provided evidence for regions that may be specifically affected in reading disability cases.
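The noise-injection procedure for identifying influential regions can be illustrated on synthetic data: perturb one region of the input with Gaussian noise, re-run the classifier, and record the drop in accuracy. The sketch below is a minimal NumPy illustration under stated assumptions; the toy 8×8 "images", the 3×3 informative region, and the difference-of-means template classifier are all illustrative stand-ins, not the paper's autoencoder–CNN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for voxel-level images: 8x8 "slices", two classes that
# differ only inside one 3x3 region (illustrative, not real MRI data).
n, side = 200, 8
X = rng.normal(size=(n, side, side))
y = rng.integers(0, 2, size=n)
signal = (slice(2, 5), slice(2, 5))   # hypothetical informative region
X[y == 1, 2:5, 2:5] += 1.0            # class 1 carries extra signal here

# Minimal linear stand-in classifier: difference-of-means template + threshold.
w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
scores = (X * w).sum(axis=(1, 2))
thr = (scores[y == 1].mean() + scores[y == 0].mean()) / 2

def accuracy(imgs):
    pred = ((imgs * w).sum(axis=(1, 2)) > thr).astype(int)
    return (pred == y).mean()

def noise_importance(region, n_rep=20, scale=2.0):
    """Mean accuracy drop when Gaussian noise is injected into one region."""
    base = accuracy(X)
    drops = []
    for _ in range(n_rep):
        Xn = X.copy()
        rows, cols = region
        Xn[:, rows, cols] += rng.normal(scale=scale, size=(n, 3, 3))
        drops.append(base - accuracy(Xn))
    return float(np.mean(drops))

drop_signal = noise_importance(signal)                    # informative region
drop_null = noise_importance((slice(5, 8), slice(5, 8)))  # background region
print(drop_signal, drop_null)
```

Because only the informative region carries class signal, noise injected there degrades accuracy far more than noise in a background region, which is the logic used to rank regions by their contribution to classification.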
