Abstract

We developed and tested a Bayesian multiple comparison correction method for Bayesian voxelwise second-level fMRI analysis in R. The performance of the method was tested with simulated and real image datasets. First, we compared false alarm and hit rates, used as proxies for selectivity and sensitivity, respectively, between the Bayesian and classical inference methods. For this comparison, we created simulated images, added noise to them, and analyzed the noise-added images with Bayesian and classical multiple comparison correction methods. Second, we analyzed five real image datasets to examine how our Bayesian method performed in realistic settings. In the performance assessment, the Bayesian correction method demonstrated good sensitivity (hit rate ≥ 75%) and acceptable selectivity (false alarm rate < 10%) when N ≥ 8. Furthermore, the Bayesian correction method showed better sensitivity than the classical correction method while maintaining the aforementioned acceptable selectivity.
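The hit and false alarm rates used in the performance assessment can be computed directly from a ground-truth activation map and a thresholded detection map. The sketch below is illustrative only (the function name and toy data are my own, not from the paper), assuming boolean voxel maps:

```python
import numpy as np

def hit_and_false_alarm_rates(truth, detected):
    """Hit rate (sensitivity) and false alarm rate from boolean voxel maps.

    truth    -- which voxels are truly active
    detected -- which voxels the analysis declared active
    """
    truth = np.asarray(truth, dtype=bool)
    detected = np.asarray(detected, dtype=bool)
    hits = np.logical_and(truth, detected).sum()          # true positives
    false_alarms = np.logical_and(~truth, detected).sum() # false positives
    hit_rate = hits / truth.sum()
    false_alarm_rate = false_alarms / (~truth).sum()
    return hit_rate, false_alarm_rate

# Toy example: 10 voxels, 4 truly active, 3 of them detected plus 1 false alarm
truth    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
detected = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
hr, far = hit_and_false_alarm_rates(truth, detected)
print(hr, far)  # 0.75 and ~0.167
```

In this toy case the map would meet the paper's sensitivity criterion (hit rate ≥ 75%) but not its selectivity criterion (false alarm rate < 10%).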

Highlights

  • We aimed to develop and test a novel method for multiple comparison correction in second-level fMRI analysis based on Bayesian statistics

  • Given that ensuring a desirable level of statistical power while controlling false positives is a significant issue in fMRI research (Lieberman & Cunningham, 2009), we addressed this issue in the present study

  • We examined the performance of the proposed Bayesian multiple comparison correction by conducting second-level analysis with the simulated and real image datasets described in the materials section


Introduction

We aimed to develop and test a novel method for multiple comparison correction in second-level (group) fMRI analysis based on Bayesian statistics. Given that a typical fMRI analysis involves testing tens to hundreds of thousands of voxels, the probability of Type I errors is greatly inflated if we adopt the widely used p-value threshold, p < .05, without any further treatment. To address this issue, researchers have developed various statistical methods (e.g., voxelwise and clusterwise familywise error rate correction and false discovery rate correction) that adopt a more stringent p-value threshold or control the rate of potential false positives.
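The two classical approaches mentioned above can be illustrated with a short sketch. Bonferroni correction (a simple familywise error rate control) divides the threshold by the number of tests, while the Benjamini-Hochberg procedure controls the false discovery rate and is typically less conservative. This is a generic illustration with made-up p-values, not the paper's Bayesian method or its exact classical comparator:

```python
import numpy as np

def bonferroni(p_values, alpha=0.05):
    """FWER control: reject H0 only where p < alpha / m."""
    p = np.asarray(p_values)
    return p < alpha / p.size

def benjamini_hochberg(p_values, alpha=0.05):
    """FDR control: reject the k smallest p-values, where k is the
    largest rank with p_(k) <= (k / m) * alpha."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])  # largest passing rank (0-based)
        reject[order[: k + 1]] = True
    return reject

p = np.array([0.001, 0.008, 0.012, 0.041, 0.20, 0.70])
print(bonferroni(p).sum())          # 2 rejections (threshold 0.05/6 ~ 0.0083)
print(benjamini_hochberg(p).sum())  # 3 rejections
```

With only six tests the gap is small, but across hundreds of thousands of voxels the Bonferroni threshold becomes extremely stringent, which is the sensitivity cost the paper's Bayesian approach aims to mitigate.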

