Abstract

In this technical note, we address an unresolved challenge in neuroimaging statistics: how to determine which of several datasets is the best for inferring neuronal responses. Comparisons of this kind are important for experimenters when choosing an imaging protocol—and for developers of new acquisition methods. However, the hypothesis that one dataset is better than another cannot be tested using conventional statistics (based on likelihood ratios), as these require the data to be the same under each hypothesis. Here we present Bayesian data comparison (BDC), a principled framework for evaluating the quality of functional imaging data, in terms of the precision with which neuronal connectivity parameters can be estimated and competing models can be disambiguated. For each of several candidate datasets, neuronal responses are modeled using Bayesian (probabilistic) forward models, such as General Linear Models (GLMs) or Dynamic Causal Models (DCMs). Next, the parameters from subject-specific models are summarized at the group level using a Bayesian GLM. A series of measures, which we introduce here, are then used to evaluate each dataset in terms of the precision of (group-level) parameter estimates and the ability of the data to distinguish similar models. To exemplify the approach, we compared four datasets that were acquired in a study evaluating multiband fMRI acquisition schemes, and we used simulations to establish the face validity of the comparison measures. To enable readers to reproduce these analyses using their own data and experimental paradigms, we provide general-purpose Matlab code via the SPM software.
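To make the group-level step concrete, the following is a minimal Matlab sketch, assuming SPM12 is on the path and that GCM is a cell array of subject-specific DCMs already estimated for one candidate dataset (the variable names here are illustrative):

    % Summarize subject-level DCM parameters with a Bayesian GLM
    % (Parametric Empirical Bayes), fitted once per candidate dataset
    M   = struct();
    M.X = ones(numel(GCM), 1);    % design matrix encoding the group mean
    PEB = spm_dcm_peb(GCM, M);    % posterior means (PEB.Ep) and
                                  % covariance (PEB.Cp) of group parameters

Other things being equal, a better dataset yields a smaller posterior covariance over the group-level parameters; the comparison measures introduced here operate on these group-level posteriors.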

Highlights

  • Conventional statistics based on likelihood ratios cannot test whether one dataset is better than another, because they require the data to be the same under each hypothesis

  • We applied the Bayesian data comparison (BDC) framework to compare fMRI data acquired under four levels of multiband acceleration

  • As with any accelerated imaging technique, multiband acquisition is vulnerable to aliased signals being unfolded incorrectly; aliasing of activations between DCM regions of interest could induce artificial correlations between regions (Todd et al., 2016). An analysis detailed in the Supplementary Materials confirmed that the aliased location of each region of interest used in the DCM analysis did not overlap with any other region of interest

Introduction

Hypothesis testing involves comparing the evidence for different models or hypotheses, given some measured data. Likelihood ratios are ubiquitous in statistics, forming the basis of the F-test and the Bayes factor in classical and Bayesian statistics, respectively. By the Neyman-Pearson lemma (Neyman and Pearson, 1933), they are the most powerful test for any given level of significance. However, the likelihood ratio test assumes that there is only one dataset, y, and so cannot be used to compare different datasets. An unresolved problem, especially pertinent to neuroimaging, is how to test the hypothesis that one dataset is better than another for making inferences.
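As a brief worked example of this constraint, consider two models of the same data y, scored by their log evidences (the free-energy values below are hypothetical numbers chosen purely for illustration):

    % Hypothetical log evidences (variational free energies) of two
    % models fitted to the SAME dataset y
    F1    = -1220.4;                      % log evidence of model 1
    F2    = -1223.4;                      % log evidence of model 2
    logBF = F1 - F2;                      % log Bayes factor = 3, i.e. strong
                                          % evidence in favour of model 1
    p1    = exp(logBF)/(1 + exp(logBF));  % posterior probability of model 1
                                          % (~0.95, assuming equal priors)

The subtraction is only meaningful because both evidences condition on the same y; evidences computed from different datasets are on different scales, which is exactly the gap that Bayesian data comparison addresses.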

