Abstract

Computational neuroimaging methods aim to predict brain responses (measured, e.g., with functional magnetic resonance imaging [fMRI]) on the basis of stimulus features obtained from computational models. The accuracy of this prediction is used as an indicator of how well the model describes the computations underlying the brain function under consideration. However, prediction accuracy is bounded by the proportion of the variance of the brain response that is related to measurement noise rather than to the stimuli (or cognitive functions). This bound on the performance of a computational model has been referred to as the noise ceiling. In previous fMRI applications, two methods have been proposed to estimate the noise ceiling, based on either a split-half procedure or Monte Carlo simulations. These methods make different assumptions about the nature of the effects underlying the data and, importantly, their relation has not yet been clarified. Here, we derive an analytical form for the noise ceiling that requires neither computationally expensive simulations nor a splitting procedure that reduces the amount of available data. The validity of this analytical definition is demonstrated in simulations: we show that the analytical solution results in the same estimate of the noise ceiling as the Monte Carlo method. Considering different simulated noise structures, we evaluate different estimators of the variance of the responses and their impact on the estimation of the noise ceiling. We furthermore evaluate the interplay between regularization (often used to estimate model fits when the number of computational features in the model is large) and model complexity on performance with respect to the noise ceiling. Our results indicate that, when considering the variance of the responses across runs, computing the noise ceiling analytically yields estimates similar to those of the split-half estimator and approaches the true noise ceiling under a variety of simulated noise scenarios. Finally, the methods are tested on real fMRI data acquired at 7 Tesla.
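
As a concrete illustration of the Monte Carlo estimate mentioned above, the following sketch simulates responses as draws from a known (true) model plus Gaussian measurement noise and scores the true model against the noisy data; the function name and parameters are illustrative assumptions rather than the implementation used in this paper:

    import numpy as np

    def monte_carlo_noise_ceiling(signal_var, noise_var, n_stimuli=100,
                                  n_sims=1000, seed=0):
        # Hypothetical helper: simulate data generated by a known (true)
        # model plus Gaussian noise, then score the true model against the
        # noisy data; the mean accuracy is the best any model could achieve.
        rng = np.random.default_rng(seed)
        ceilings = np.empty(n_sims)
        for i in range(n_sims):
            true_response = rng.normal(0.0, np.sqrt(signal_var), n_stimuli)
            noise = rng.normal(0.0, np.sqrt(noise_var), n_stimuli)
            measured = true_response + noise
            # Even the true generating model cannot correlate with the
            # measured data beyond this value.
            ceilings[i] = np.corrcoef(true_response, measured)[0, 1]
        return ceilings.mean()

In this simplified Gaussian setting the simulated ceiling converges to the closed-form value σ_s / sqrt(σ_s^2 + σ_n^2) (about 0.71 when the signal and noise variances are equal), which illustrates why an analytical expression can stand in for the expensive simulation.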

Highlights

  • Computational modelling approaches applied to functional magnetic resonance imaging (fMRI) measurements aim to explain and predict brain responses by expressing them as a function of model features that describe the sensory stimuli [1,2,3,4,5]

  • Encoding computational models in brain responses measured with fMRI allows testing the algorithmic representations carried out by neural populations within voxels

  • We evaluate existing approaches to estimating the best possible accuracy that any computational model can achieve, conditioned on the amount of measurement noise present in the experimental data

Introduction

Computational modelling approaches applied to functional magnetic resonance imaging (fMRI) measurements aim to explain and predict brain responses by expressing them as a function of model features that describe the sensory (or cognitive) stimuli [1,2,3,4,5]. The prediction accuracy is affected by inaccuracies in the definition of the algorithm (i.e., mismodelling), by other sources of variance in the brain responses that are not explicitly modelled (e.g., attention and adaptation), and, most importantly, by physiological (e.g., respiration) and measurement noise. Tested models of sensory (or cognitive) stimuli do not account for the variability in the response between repetitions of the same stimulus, which imposes a bound on the ability to encode computational models in fMRI responses. This bound can be interpreted as the performance of the computational model underlying the generation of the responses (i.e., the true underlying model) conditional on the noise (experimental, physiological, or other) that is present in the test data (under the assumption of infinite training data). Reporting this test-data noise ceiling allows assessing the quality of the predictions obtained with computational modelling approaches relative to the quality of the data, and comparing modelling efforts on different datasets across labs.
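
The between-repetition variability described here is exactly what a split-half noise-ceiling estimate exploits. A minimal sketch, assuming a voxel's responses are stored as an array of repetitions by stimuli (the array layout, helper name, and Spearman-Brown correction are illustrative assumptions; the estimators analysed in this paper may differ in detail):

    import numpy as np

    def split_half_noise_ceiling(responses, seed=0):
        # responses: array of shape (n_repetitions, n_stimuli) holding a
        # voxel's responses to the same stimuli across repeated runs.
        rng = np.random.default_rng(seed)
        n_rep = responses.shape[0]
        order = rng.permutation(n_rep)
        half_a = responses[order[: n_rep // 2]].mean(axis=0)
        half_b = responses[order[n_rep // 2:]].mean(axis=0)
        # Correlation between the two half-averages measures the
        # reliable (stimulus-driven) variance in the data.
        r_half = np.corrcoef(half_a, half_b)[0, 1]
        if r_half <= 0:
            return 0.0
        # Spearman-Brown correction extrapolates the split-half
        # reliability to the full data set; its square root bounds the
        # correlation any model can reach against the averaged data.
        return np.sqrt(2.0 * r_half / (1.0 + r_half))

Because each half uses only part of the data, such an estimator becomes unstable when few repetitions are available, which is the practical motivation for the analytical form derived in this paper.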

