Abstract

Background

The inflation of falsely rejected hypotheses associated with multiple hypothesis testing is seen as a threat to the knowledge base of the scientific literature. One of the most recently developed statistical constructs for dealing with this problem is the false discovery rate (FDR), which aims to control the proportion of falsely rejected null hypotheses among all rejected hypotheses. FDR has been applied to a variety of problems, especially the analysis of 3-D brain images in neuroimaging, where the predominant form of statistical inference involves the more conventional control of false positives through Gaussian random field theory (RFT). In this study we considered FDR and RFT as alternative methods for handling multiple testing in the analysis of 1-D continuum data. The field of biomechanics has recently adopted RFT, but to our knowledge FDR has not previously been used to analyze 1-D biomechanical data, nor has there been any consideration of how FDR vs. RFT can affect biomechanical interpretations.

Methods

We reanalyzed a variety of publicly available experimental datasets to understand the characteristics that contribute to the convergence and divergence of RFT and FDR results. We also ran a variety of numerical simulations involving smooth, random Gaussian 1-D data, with and without true signal, to provide complementary explanations for the experimental results.

Results

Our results suggest that RFT and FDR thresholds (the critical test statistic value used to judge statistical significance) were qualitatively identical for many experimental datasets, but highly dissimilar for others, involving non-trivial changes in data interpretation. Simulation results clarified that RFT and FDR thresholds converge as the true signal weakens and diverge when the signal is broad in terms of the proportion of the continuum it occupies. Results also showed that, while sample size affected the relation between RFT and FDR results for small sample sizes (<15), this relation was stable for larger sample sizes, for which only the nature of the true signal was important.

Discussion

RFT and FDR thresholds are both computationally efficient because both are parametric, but only FDR can adapt to the signal features of a particular dataset: the threshold lowers with increasing signal strength, yielding a gain in sensitivity. Additional advantages and limitations of these two techniques are discussed. This article is accompanied by freely available software for implementing FDR analyses of 1-D data, along with scripts to replicate our results.
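FDR control of the kind described above is most commonly implemented with the Benjamini-Hochberg step-up procedure, which finds the largest ordered p-value lying at or below the line q·k/m and uses it as the significance threshold. The sketch below is illustrative only (the function name and data are hypothetical, and this is not the authors' accompanying software):

```python
import numpy as np

def bh_fdr_threshold(p_values, q=0.05):
    """Benjamini-Hochberg critical p-value for FDR level q.

    Sorts the m p-values and returns the largest p_(k) with
    p_(k) <= q * k / m; tests with p-values at or below this
    threshold are declared significant. Returns 0.0 if no
    p-value falls under the step-up line (no discoveries).
    """
    p = np.sort(np.asarray(p_values, dtype=float))
    m = p.size
    crit = q * np.arange(1, m + 1) / m   # BH step-up line
    below = p <= crit
    if not below.any():
        return 0.0
    return p[np.nonzero(below)[0][-1]]

# Example: with q = 0.05, the first four p-values survive,
# so the critical p-value is 0.02.
thresh = bh_fdr_threshold([0.001, 0.009, 0.012, 0.02, 0.3, 0.5])
```

In the 1-D continuum setting studied here, `p_values` would be the pointwise p-values along the continuum, and the resulting critical p-value maps to a critical test statistic threshold that adapts to the data, unlike the fixed RFT threshold.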
