Abstract

Economics papers increasingly report balance, pre-trend, placebo, and other “sniff tests,” rejection of which is bad news for authors, undermining the credibility of their main results. We derive nonparametric bounds on the latent proportion of significant sniff tests removed by the publication process (whether by p-hacking or relegation to the file drawer) and the proportion whose significance was due to true misspecification, not bad luck. Using a hand-collected sample of nearly 30,000 sniff tests, we estimate a removal rate of over 30% for balance tests in randomized controlled trials and a misspecification rate of over 40% for other tests.
