Abstract

The doctrine of pseudoreplication (DP) offers specific advice on how to ensure statistical independence and compute F-ratios properly when testing a null hypothesis. Our target article showed that this advice can lead to problems in experimental design and analysis. Though a few commentators attempted to defend DP, none offered substantive evidence that our modeling results were incorrect. In our response, we further highlight the complications surrounding definitions of experimental units. In particular, we show that the definition of independence assumed in DP is inconsistent with independence as defined in probability theory. We show that interconnectedness across levels of analysis is pervasive and that no simple set of rules or procedures can help experimenters avoid this problem. We argue that the relevance or interference of a particular level of analysis can be determined only after an experiment is done. In our view, analytical methods must be designed to match experiments, which is the opposite of the advice offered by DP. Finally, we emphasize the weakness of null hypothesis testing and the inability of p values to predict whether a result will generalize or be replicated.
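For reference, the probability-theoretic definition of independence invoked here is the standard one; the following gloss is ours and is not quoted from the target article or the commentaries. Events $A$ and $B$ are independent if and only if

\[
P(A \cap B) = P(A)\,P(B),
\]

and random variables are independent when their joint distribution factors into the product of their marginals. The abstract's claim is that the notion of independence assumed in DP does not reduce to this factorization condition.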
