Abstract

Political scientists are increasingly using experiments to study the relationship between institutions and political and economic outcomes. Institutions are the “rules and procedures that structure social interaction by constraining and enabling actors’ behavior” (Helmke and Levitsky, 2006, 5). During more than two decades of renewed interest in institutions in political science, researchers have sought answers to broad questions, such as: How do institutions affect outcomes like growth and development, participation, accountability, and policy selection? Which institutions, and which elements of institutional design, matter for these outcomes? How do formal institutions interact with informal institutions? How can weak political institutions be strengthened? And what are the causes of institutional change?

The interest in using experiments to address such questions reflects an enduring concern with causal inference in the institutions literature (Frye, 2012). Early scholarly work had limited success with the identification problems that arise because institutions are highly endogenous to the outcomes they are thought to cause. This motivated a large empirical literature that used instruments to exploit exogenous variation and isolate the causal effects of institutions (Acemoglu, Johnson and Robinson, 2001). Yet it is notoriously difficult to find instruments that meet the requirements for unbiased inference (Harrison and List, 2004), and it is far from clear whether instrumental variables approaches identify a quantity or population of theoretical interest (Deaton, 2010). Moreover, many of these studies employed cross-national data and composite indices of institutional quality (such as Polity IV and Freedom House), which introduce measurement problems and compromise the ability to identify the effects of any single institution (Pande and Udry, 2006).
Randomized experiments offer one of the most promising approaches to addressing the causal inference problem in research on institutions. One defining feature of experiments—whether field, lab, or survey—is that the researcher randomly assigns units in the target population to a ‘treatment’ group, which receives a particular intervention, and a ‘control’ group that does not, or that receives a different version of the intervention.1 With enough units, random assignment achieves, in expectation, balance across the groups on all pre-treatment covariates, making the control group a suitable counterfactual for the treatment group. Thus, differences in outcomes across the groups can be attributed solely to
