Abstract

Experiments in psychology often target hypothetical constructs to test some causal hypothesis or theory. In light of this goal, it is pertinent to use a manipulation that actually changes the focal hypothetical construct, and only that construct. In assessing whether such manipulation “success” can be assumed, researchers often include manipulation validity checks in their designs—a measure of the focal construct that should be responsive to the manipulation. One interpretation of a positive manipulation check is that it lends credence to a particular causal interpretation of a phenomenon. Scrutinizing the results of such manipulation checks supposedly enables a more stringent test of a causal hypothesis. This paper submits that manipulation checks do not improve our inferences to causal explanations, but may in practice result in weaker hypothesis tests. Rather than being useful, manipulation checks are at best uninformative, and more likely compromise the appraisal of a causal hypothesis. The second half of this paper advocates four methodological heuristics, offered as alternatives to manipulation validity checks, to more severely test causal hypotheses. The heuristics call for a greater focus on (a) assessing the specificity of manipulations, (b) evaluating theoretical risk, (c) attempting to cast doubt on alternatives, and (d) appraising the relative merits of explanations. I conclude that rather than relying on manipulation checks as a ‘Band-Aid’ method to alleviate validity concerns, inferential rigor can be improved through these heuristics.
