Abstract

Correlation does not imply causation and psychologists' causal inference training often focuses on the conclusion that therefore experiments are needed—without much consideration for the causal inference frameworks used elsewhere. This leaves researchers ill‐equipped to solve inferential problems that they encounter in their work, leading to mistaken conclusions and incoherent statistical analyses. For a more systematic approach to causal inference, this article provides brief introductions to the potential outcomes framework—the “lingua franca” of causal inference—and to directed acyclic graphs, a graphical notation that makes it easier to systematically reason about complex causal situations. I then discuss two issues that may be of interest to researchers in social and personality psychology who think that formalized causal inference is of little relevance to their work. First, posttreatment bias: In various common scenarios (noncompliance, mediation analysis, missing data), researchers may analyze data from experimental studies in a manner that results in internally invalid conclusions, despite randomization. Second, tests of incremental validity: Routine practices in personality psychology suggest that they may be conducted for at least two different reasons (to demonstrate the non‐redundancy of new scales, to support causal conclusions) without being particularly suited for either purpose. Taking causal inference seriously is challenging; it reveals assumptions that may make many uncomfortable. However, ultimately it is a necessary step to ensure the validity of psychological research.
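
To make the posttreatment-bias point concrete, the following minimal simulation sketch (not part of the article; the data-generating process, variable names, and effect sizes are hypothetical choices for illustration) shows how adjusting for a variable measured after a randomized treatment can bias the estimate, even though randomization makes the unadjusted comparison valid.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process (illustration only):
# randomized treatment X, unobserved cause U, posttreatment variable M,
# outcome Y. True total effect of X on Y = 0.3 + 0.4 * 0.5 = 0.5;
# true direct effect = 0.3.
x = rng.binomial(1, 0.5, n)                    # randomized treatment
u = rng.normal(size=n)                         # unobserved common cause of M and Y
m = 0.4 * x + 0.8 * u + rng.normal(size=n)     # posttreatment variable (mediator)
y = 0.3 * x + 0.5 * m + 0.8 * u + rng.normal(size=n)

# Unadjusted comparison: randomization makes this an unbiased estimate
# of the total effect (about 0.5).
print("difference in means:", y[x == 1].mean() - y[x == 0].mean())

# Adjusting for the posttreatment variable M opens a noncausal path
# through U, so the coefficient on X is biased and no longer recovers
# the direct effect of 0.3 (it lands closer to 0.14 here).
design = np.column_stack([np.ones(n), x, m])
coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
print("coefficient on X after adjusting for M:", coefs[1])
```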
