Abstract

The Comparative Manifesto Project (CMP) provides the only time series of estimated party policy positions in political science and has been extensively used in a wide variety of applications. Recent work (e.g., Benoit, Laver, and Mikhaylov 2009; Klingemann et al. 2006) focuses on nonsystematic sources of error in these estimates that arise from the text generation process. Our concern here, by contrast, is with error that arises during the text coding process since nearly all manifestos are coded only once by a single coder. First, we discuss reliability and misclassification in the context of hand-coded content analysis methods. Second, we report results of a coding experiment that used trained human coders to code sample manifestos provided by the CMP, allowing us to estimate the reliability of both coders and coding categories. Third, we compare our test codings to the published CMP “gold standard” codings of the test documents to assess accuracy and produce empirical estimates of a misclassification matrix for each coding category. Finally, we demonstrate the effect of coding misclassification on the CMP's most widely used index, its left-right scale. Our findings indicate that misclassification is a serious and systemic problem with the current CMP data set and coding process, suggesting the CMP scheme should be significantly simplified to address reliability issues.
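
To make the scale's exposure to coding error concrete, the sketch below shows how a row-stochastic misclassification matrix shifts coded category shares, and hence an additive left-right index of the CMP type (right-category share minus left-category share). This is a minimal illustration in Python: the three categories, the true shares, and the error rates are all hypothetical, not the empirical estimates reported in the paper.

    import numpy as np

    # Hypothetical three-category scheme; shares and error rates are
    # invented for illustration, not the paper's empirical estimates.
    categories = ["left", "right", "other"]
    true_shares = np.array([0.30, 0.20, 0.50])  # true per-category shares

    # Misclassification matrix M: M[i, j] is the probability that a unit
    # whose true category is i gets coded as j. Rows sum to one; the
    # diagonal holds each category's coding accuracy.
    M = np.array([
        [0.80, 0.05, 0.15],   # true "left"
        [0.05, 0.75, 0.20],   # true "right"
        [0.10, 0.10, 0.80],   # true "other"
    ])

    observed_shares = true_shares @ M  # expected shares after one coding pass

    def left_right(shares):
        # CMP-style additive index: right share minus left share.
        return shares[categories.index("right")] - shares[categories.index("left")]

    print(f"true index:     {left_right(true_shares):+.3f}")      # -0.100
    print(f"observed index: {left_right(observed_shares):+.3f}")  # -0.085

In this toy example the off-diagonal error pulls the index toward zero; empirical estimates of the misclassification matrix, of the kind reported below, allow such distortions to be quantified category by category.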

Highlights

  • The Comparative Manifesto Project (CMP) provides the only time series of estimated party policy positions in political science and has been extensively used in a wide variety of applications

  • To assess a typical and well-known project that relies on the human coding of political texts, we examine the dominant professional source of human expert text codings of party policy positions in political analysis: the long-standing Comparative Manifesto Project (CMP) (Budge et al. 2001; Klingemann et al. 2006)

  • Our examination of coder disagreement using experimental recoding of core CMP documents clearly indicates that the CMP coding process is highly prone to misclassification and stochastic coding errors

Introduction

The Comparative Manifesto Project (CMP) provides the only time series of estimated party policy positions in political science and has been extensively used in a wide variety of applications. Texts are parsed into units of analysis relevant to the research question, such as words, sentences, or quasi-sentences. Following this first step of unitization, each unit is coded by assigning it to a category from a coding scheme, which is a core component of the text analysis project. To assess a typical and well-known project that relies on the human coding of political texts, we examine the dominant professional source of human expert text codings of party policy positions in political analysis: the long-standing Comparative Manifesto Project (CMP) (Budge et al. 2001; Klingemann et al. 2006). Our findings have direct implications for future use of the CMP scheme and illustrate how reliability can be tested for any research design that incorporates human coding.
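
As a concrete illustration of the kind of reliability test the last sentence refers to, the sketch below computes raw agreement and Cohen's kappa for two coders who have each assigned the same quasi-sentences to category codes. The coders and their codings are invented for illustration; labels such as per504 merely mimic the CMP's category numbering.

    from collections import Counter

    # Two hypothetical coders assign the same six quasi-sentences to
    # CMP-style categories; the codings are invented for illustration.
    coder_a = ["per504", "per411", "per504", "per605", "per411", "per504"]
    coder_b = ["per504", "per411", "per605", "per605", "per504", "per504"]
    n = len(coder_a)

    # Raw agreement: share of units on which the coders coincide.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Chance agreement from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2

    # Cohen's kappa corrects raw agreement for agreement by chance.
    kappa = (observed - expected) / (1 - expected)
    print(f"raw agreement: {observed:.2f}, kappa: {kappa:.2f}")  # 0.67, 0.48

The same template extends to more coders and more units: only the agreement and chance-correction terms change (e.g., Fleiss' kappa or Krippendorff's alpha), which is why a check of this kind can be built into any research design that incorporates human coding.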
