Abstract

Recent advances in parallel imaging and simultaneous multi-slice techniques have permitted whole-brain fMRI acquisitions at sub-second sampling intervals without significantly sacrificing spatial coverage or resolution. Beyond probing brain function at finer temporal scales, faster sampling may also enhance functional sensitivity, owing both to cleaner neural representations (less aliased physiological noise) and to statistical benefits (more degrees of freedom for a fixed scan duration). Alongside these intriguing aspects of fast acquisitions, however, confusion has arisen regarding (1) how to preprocess and analyze fast fMRI data, and (2) what the extent of the benefit of fast acquisitions actually is, i.e., how fast is fast enough for a specific research aim. The first question is motivated by the altered spectral distribution and noise characteristics at short sampling intervals, while the second seeks to reconcile the complicated trade-offs between functional contrast-to-noise ratio and effective degrees of freedom. Although there have been recent efforts to address different aspects of these two questions empirically, in this work we discuss, from a theoretical perspective accompanied by illustrative, proof-of-concept in vivo human fMRI data, a few considerations that are rarely mentioned yet are important both for preprocessing and for optimizing statistical inference in studies that employ acquisitions with sub-second sampling intervals. Our summary recommendations include caution about relying on low-pass filtering to de-noise physiological contributions, use of statistical models with sufficient complexity to account for the substantially increased serial correlation, and care in using rapid sampling to enhance functional sensitivity, given that different analysis models are associated with distinct trade-offs between contrast-to-noise ratio and effective degrees of freedom. As an example, we demonstrate that as TR shortens, the intrinsic differences in how noise is accommodated in general linear models and in Pearson correlation analyses (assuming Gaussian-distributed stochastic signals and noise) can lead to quite different outcomes, either gaining or losing statistical power.
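
To make the final point concrete, below is a minimal Monte Carlo sketch (ours, not the paper's actual simulation or data) of how a prewhitened GLM analysis with a fixed task regressor and a Pearson correlation analysis between two stochastic signals can scale differently as TR shortens while scan duration is held fixed. The AR(1) noise model, the fixed correlation time constant tau, the block design, the effect sizes, and the effective-sample-size correction used here are all assumptions chosen purely for illustration.

# Minimal Monte Carlo sketch (not the paper's actual simulation) contrasting how a
# prewhitened GLM (fixed task regressor, stochastic noise) and a Pearson correlation
# analysis (two stochastic signals, as in functional connectivity) behave as TR
# shortens while scan duration is fixed. All parameters below -- scan length, block
# design, AR(1) time constant, effect sizes -- are assumptions for illustration only.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)

def hrf(t):
    """Simple double-gamma hemodynamic response function."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

def ar1(n, rho):
    """Unit-variance AR(1) Gaussian series with lag-1 correlation rho."""
    e = np.empty(n)
    e[0] = rng.standard_normal()
    for i in range(1, n):
        e[i] = rho * e[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return e

def prewhiten(v, rho):
    """Exact AR(1) whitening transform (true rho assumed known)."""
    w = np.empty_like(v)
    w[0] = np.sqrt(1 - rho**2) * v[0]
    w[1:] = v[1:] - rho * v[:-1]
    return w

def run(tr, duration=300.0, tau=2.0, beta=0.3, shared=0.5, n_rep=500):
    n = int(duration / tr)
    t = np.arange(n) * tr
    rho = np.exp(-tr / tau)                      # shorter TR -> stronger serial correlation
    box = ((t % 60.0) < 30.0).astype(float)      # 30 s on / 30 s off block design
    x = np.convolve(box, hrf(np.arange(0.0, 32.0, tr)))[:n]
    x = (x - x.mean()) / x.std()
    X = np.column_stack([np.ones(n), x])
    Xw = np.column_stack([prewhiten(X[:, 0], rho), prewhiten(X[:, 1], rho)])
    t_glm, z_corr = [], []
    for _ in range(n_rep):
        # Task GLM: fixed regressor plus AR(1) noise, analyzed with prewhitening
        y = beta * x + ar1(n, rho)
        yw = prewhiten(y, rho)
        b, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
        resid = yw - Xw @ b
        sigma2 = resid @ resid / (n - 2)
        cov = sigma2 * np.linalg.inv(Xw.T @ Xw)
        t_glm.append(b[1] / np.sqrt(cov[1, 1]))
        # "Connectivity-style" Pearson correlation between two stochastic signals
        s = ar1(n, rho)
        y1, y2 = shared * s + ar1(n, rho), shared * s + ar1(n, rho)
        r = np.corrcoef(y1, y2)[0, 1]
        n_eff = n * (1 - rho**2) / (1 + rho**2)  # crude AR(1) effective-sample correction
        z_corr.append(np.arctanh(r) * np.sqrt(max(n_eff - 3, 1)))
    print(f"TR={tr:4.2f}s  N={n:5d}  rho={rho:.2f}  "
          f"mean t(prewhitened GLM)={np.mean(t_glm):5.1f}  "
          f"mean z(corrected corr)={np.mean(z_corr):5.1f}")

for tr in (2.0, 1.0, 0.5, 0.25):
    run(tr)

Because the noise correlation time constant is held fixed, shortening TR raises the lag-1 autocorrelation of the sampled series, so the larger nominal sample count does not translate into a proportional gain in effective degrees of freedom, and the two analysis styles do not benefit from faster sampling to the same extent.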
