Abstract

QCA has recently been the subject of intense criticism, and although the substance of that criticism is not entirely new, it differs from earlier critiques by invoking simulations for the evaluation of QCA. In addition to debates about the meaning of the simulation results, there is a more fundamental discussion about whether simulations can yield relevant insights in principle. Some voices in the QCA community reject simulations per se because they necessarily lack case knowledge. As a consequence, the debate is at an impasse on a metalevel: critics of QCA rely on simulations whose results some QCA proponents refuse to consider insightful. This article addresses this impasse and presents six reasons why simulations must be considered appropriate for evaluating QCA. I show that, if taken to its conclusion, the central argument against simulations undermines the need for running a truth table analysis in the first place. The way forward in this debate should not be about whether simulations are useful, but about how to design meaningful simulations for evaluating QCA.

Highlights

  • Since its introduction in 1987, Qualitative Comparative Analysis (QCA) has received as much appraisal as it has been the target of criticism

  • Simulations can be based at least partially on empirical data or exclusively on hypothetical data, and can involve Monte Carlo simulations or a single simulation (Krogslund et al. 2015; Lucas and Szatrowski 2014)

  • One-shot simulations of hypothetical data do not suffer from these problems because the data-generating process is modeled directly, but they are vulnerable to generalizing about QCA as a whole on the basis of a single dataset


Introduction

Since its introduction in 1987, Qualitative Comparative Analysis (QCA) has received as much appraisal as it has been the target of criticism. Simulations of the consequences of overspecification for the validity of QCA solutions have been rejected on the ground that the analysis of hypothetical data involves no case knowledge, a constitutive feature of empirical QCA studies (Olsen 2014; Ragin 2014). This argument entails the claim that we can identify superfluous conditions through case studies prior to the truth table analysis and thus avoid the problem of overspecification. The argument generalizes to the application of the Coincidence Analysis algorithm (Baumgartner 2009). This algorithm follows a different protocol for determining redundant conjuncts than the Quine–McCluskey algorithm, but the former would be pointless to apply if case studies allowed us to identify all causally irrelevant conjuncts before the truth table analysis. What counts as a robust result depends on the research goal and might pertain to the question of whether a single conjunction is always part of the solution.
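The kind of simulation at issue can be made concrete with a minimal sketch. The data-generating process Y = A*B + C and all function names below are my own illustrative assumptions, not taken from any of the cited studies; the sketch only shows one mechanism through which overspecification matters: adding a causally irrelevant condition D splits populated truth-table rows and inflates the number of empty rows (limited diversity) at a fixed number of cases.

```python
import random

def simulate_dataset(n_cases, seed=0):
    """Generate hypothetical crisp-set data. The assumed data-generating
    process is Y = A*B + C, so condition D is causally irrelevant."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_cases):
        a, b, c, d = (rng.randint(0, 1) for _ in range(4))
        y = int((a and b) or c)
        data.append({"A": a, "B": b, "C": c, "D": d, "Y": y})
    return data

def truth_table(data, conditions):
    """Collapse cases into truth-table rows over the chosen conditions,
    counting cases per row and how many of them show the outcome Y=1."""
    table = {}
    for case in data:
        row = tuple(case[c] for c in conditions)
        pos, tot = table.get(row, (0, 0))
        table[row] = (pos + case["Y"], tot + 1)
    return table

def empty_rows(table, conditions):
    """Logically possible rows without empirical cases (limited diversity)."""
    return 2 ** len(conditions) - len(table)

# Monte Carlo loop: across repeated draws, overspecifying the model
# (including the irrelevant D) never reduces limited diversity.
runs = 50
extra_empty = 0
for seed in range(runs):
    data = simulate_dataset(n_cases=12, seed=seed)
    t3 = truth_table(data, ["A", "B", "C"])
    t4 = truth_table(data, ["A", "B", "C", "D"])
    extra_empty += empty_rows(t4, ["A", "B", "C", "D"]) >= empty_rows(t3, ["A", "B", "C"])
print(extra_empty, "of", runs, "runs show at least as many empty rows under overspecification")
```

Because the data-generating process is fully specified, we know in advance that D is redundant; the point of contention in the debate is whether such knowledge, unavailable in empirical research, disqualifies the simulation or is precisely what makes it a valid benchmark.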

Limited diversity and counterfactuals
Sampling
Missing data