Abstract

Experimental research has recently received major new attention in the social sciences. This phenomenon has crossed disciplinary lines; economics has seen a surge of field experiments, particularly in the development subfield (Duflo et al. 2008), as well as laboratory experiments related to studies of decision-making with roots in game theory and behavioral economics (Camerer 2003). In political science, experiments have gone from a marginalized method to a major source of insight in studies of political communication (Druckman and Leeper 2012), ethnic politics (Wong 2005; Dunning and Nilekani 2013), conflict studies (McDermott et al. 2002), political mobilization (Green et al. 2013), clientelistic politics (Vicente and Wantchekon 2009; Gonzalez-Ocantos et al. 2012; De La O 2013), and more. This growth stands alongside well-established experimental traditions in psychology and in related subfields of sociology. Furthermore, institutional developments such as the Time-sharing Experiments for the Social Sciences program have made experimental research more accessible to political science and sociology researchers without the resources to run their own laboratories. This emergence of experiments as a major tool for social science research raises issues for multi-method research design. Experimental designs strive to reduce the number of assumptions needed to justify causal inference. Do experiments still benefit from multi-method designs incorporating case-study research? If so, how? This chapter argues that multi-method designs combining qualitative and experimental methods are unusually strong. While experiments depend on a different and narrower set of assumptions than regression-type designs, they still require assumptions about measurement, causal interconnections among cases, and experimental realism. Furthermore, while evidence regarding causal pathways between the treatment and the outcome is not required to make a causal inference using an experiment, such evidence can vastly increase the social scientific value of experimental results, and qualitative research can contribute substantially to this objective.

Experiments and the Potential-Outcomes Framework

Experiments are important because they are a particularly strong tool for causal inference. Indeed, experiments are the paradigm of causal inference under the potential-outcomes framework. As explained in more detail in Chapter 2, the reason is that random assignment, in combination with the law of large numbers, makes the treatment and control groups (or, in some experiments, the various treatment groups) credible counterfactuals for each other.
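As a brief sketch of the potential-outcomes logic invoked here (the notation is standard and is not drawn from the chapter itself): for each unit $i$, let $Y_i(1)$ and $Y_i(0)$ denote the potential outcomes under treatment and control, with treatment indicator $T_i$. The quantity of interest is the average treatment effect,

$$\tau = \mathbb{E}[Y_i(1) - Y_i(0)].$$

Random assignment implies $(Y_i(1), Y_i(0)) \perp T_i$, so

$$\mathbb{E}[Y_i(1)] = \mathbb{E}[Y_i \mid T_i = 1], \qquad \mathbb{E}[Y_i(0)] = \mathbb{E}[Y_i \mid T_i = 0],$$

and the difference in observed group means, $\bar{Y}_{T=1} - \bar{Y}_{T=0}$, converges to $\tau$ by the law of large numbers as the groups grow large. This is the sense in which the treatment and control groups serve as credible counterfactuals for each other.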
