Abstract

Prediction of cognitive ability latent factors such as general intelligence from neuroimaging has elucidated questions pertaining to their neural origins. However, predicting general intelligence from functional connectivity limits hypotheses to that specific domain, remaining agnostic to time‐distributed features and dynamics. We used an ensemble of recurrent neural networks to circumvent this limitation, bypassing feature extraction, to predict general intelligence from resting‐state functional magnetic resonance imaging regional signals of a large sample (n = 873) of Human Connectome Project adult subjects. By ablating common resting‐state networks (RSNs) and measuring the degradation in performance, we show that model reliance can be mostly explained by network size. Using our approach based on the temporal variance of saliencies, that is, gradients of outputs with regard to inputs, we identify a candidate set of networks that more reliably affect performance in the prediction of general intelligence than similarly sized RSNs. Our approach further allows us to test the effect of local alterations on the data and the expected changes in derived metrics such as functional connectivity and instantaneous innovations.
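The saliency measure described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's model: a toy `tanh` readout stands in for the trained RNN ensemble, and the sizes, weights, and finite-difference gradient are all assumptions made for the sketch. It computes the gradient of the scalar prediction with respect to every timepoint of every region, then takes the variance over time per region.

```python
import numpy as np

rng = np.random.default_rng(0)
T, R = 100, 5                       # timepoints, regions (arbitrary toy sizes)
X = rng.standard_normal((T, R))     # toy regional resting-state signals
w = rng.standard_normal(R)          # toy readout weights

def model(X):
    # Hypothetical nonlinear readout standing in for the RNN ensemble:
    # a scalar "prediction" from the whole timeseries.
    return (np.tanh(X) @ w).sum()

def saliency(X, eps=1e-5):
    # Numerical gradient of the scalar output w.r.t. every input sample
    # (central finite differences; autograd would be used in practice).
    S = np.zeros_like(X)
    for idx in np.ndindex(X.shape):
        Xp = X.copy(); Xp[idx] += eps
        Xm = X.copy(); Xm[idx] -= eps
        S[idx] = (model(Xp) - model(Xm)) / (2 * eps)
    return S

S = saliency(X)                 # (T, R) saliency map
region_scores = S.var(axis=0)   # temporal variance of saliency, per region
```

Regions whose saliency varies strongly over time receive high scores, which is the quantity used to rank networks against similarly sized RSNs.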

Highlights

  • Intelligence comprises a number of distinct mental abilities

  • We aim to demonstrate that learning from lower-level data, that is, timeseries instead of resting-state functional connectivity (RSFC), can bring further insights into the question of the neuronal bases of intelligence

  • This type of data requires specialized models, and we opt to use an ensemble of recurrent neural networks (RNNs) for this task


| INTRODUCTION

Intelligence comprises a number of distinct mental abilities. According to Colom, Karama, Jung, and Haier (2010), “Reasoning, problem solving, and learning are crucial facets of human intelligence.” By means of factor analysis, a single factor was found to explain most of the empirical positive correlations between tests (Spearman, 1904; Thurstone, 1940). Deep learning models are able to explore existing nonlinearities in neuroimaging data and automatically extract informative features (Abrol et al., 2021). This often allows deep models to surpass traditional machine learning models in performance. We aim to demonstrate that learning from lower-level data, that is, timeseries instead of RSFC, can bring further insights into the question of the neuronal bases of intelligence. This type of data requires specialized models, and we opt to use an ensemble of recurrent neural networks (RNNs) for this task.
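The single-factor result can be illustrated with a small simulation. The sample size, loadings, and noise level below are arbitrary assumptions, not values from the cited studies: scores generated from one latent factor yield an all-positive correlation matrix whose first eigenvalue accounts for most of the shared variance, mirroring Spearman's g.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_tests = 1000, 6

g = rng.standard_normal(n_subj)              # latent general factor per subject
loadings = rng.uniform(0.5, 0.9, n_tests)    # positive factor loadings (assumed)
noise = rng.standard_normal((n_subj, n_tests))
scores = np.outer(g, loadings) + 0.5 * noise # simulated test scores

# Positive manifold: all between-test correlations come out positive.
corr = np.corrcoef(scores, rowvar=False)

# The leading eigenvalue of the correlation matrix plays the role of
# the single dominant factor found by factor analysis.
eigvals = np.linalg.eigvalsh(corr)[::-1]     # descending order
explained = eigvals[0] / eigvals.sum()       # fraction of variance explained
```

With these settings the first component explains well over half of the total variance, far more than any other, which is the pattern the single-factor interpretation rests on.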

| METHODS
| RESULTS
| DISCUSSION AND CONCLUSION