Abstract

Concern over democratic erosion has led to a proliferation of proposed interventions to strengthen democratic attitudes in the United States. Resource constraints, however, prevent implementing all proposed interventions. One approach to identifying promising interventions is to leverage domain experts to forecast the effectiveness of candidate interventions. We recruit experts who develop general knowledge about a social problem (academics), experts who directly intervene on the problem (practitioners), and nonexperts from the public to forecast the effectiveness of interventions to reduce partisan animosity, support for undemocratic practices, and support for partisan violence. Comparing 14,076 forecasts submitted by 1,181 forecasters against the results of a megaexperiment (n = 32,059) that tested 75 hypothesized effects of interventions, we find that both types of experts outperformed members of the public, though the two expert groups were accurate in different ways. While academics' predictions were more specific (i.e., they identified a larger proportion of ineffective interventions and had fewer false-positive forecasts), practitioners' predictions were more sensitive (i.e., they identified a larger proportion of effective interventions and had fewer false-negative forecasts). Consistent with this, practitioners were better at predicting the best-performing interventions, while academics were better at predicting which interventions performed worst. Our paper highlights the importance of differentiating types of experts and types of accuracy. We conclude by discussing factors that affect whether sensitive or specific forecasters are preferable, such as the relative costs of false positives and false negatives and the expected rate of intervention success.
