Abstract

Scholars contend that the reason for stasis in human rights measures is a biased measurement process, rather than stagnating human rights practices. We argue that bias may be introduced as part of the compilation of the human rights reports that serve as the foundation of human rights measures. An additional source of potential bias may be human coders, who translate human rights reports into human rights scores. We first test for biases via a machine-learning approach using natural language processing and find substantial evidence of bias in human rights scores. We then present findings of an experiment on the coders of human rights reports to assess whether potential changes in the coding procedures or interpretation of coding rules affect scores over time. We find no evidence of coder bias and conclude that human rights measures have changed over time and that bias is introduced as part of monitoring and reporting.
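The abstract does not specify the machine-learning test. As a purely illustrative sketch, and not the authors' actual method, one way to probe for changing reporting standards is to fit a text model on early reports and check whether its predictions diverge from human-coded scores in later years. All specifics below (the file human_rights_reports.csv, its columns, the year cutoff, and the TF-IDF plus logistic-regression model) are assumptions for illustration only.

# Illustrative sketch only: assumes a hypothetical CSV with columns
# "year", "report_text", and an integer ordinal "score".
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

df = pd.read_csv("human_rights_reports.csv")  # hypothetical data file
train = df[df.year <= 1999]                    # assumed cutoff for the "early" period
test = df[df.year > 1999]

# Learn the mapping from report language to human-coded scores on early reports.
model = make_pipeline(
    TfidfVectorizer(min_df=5, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train.report_text, train.score)

# Compare model predictions with human-coded scores year by year in the later period;
# a systematic drift would be consistent with changing reporting or coding standards.
test = test.assign(predicted=model.predict(test.report_text))
drift = (test.predicted.astype(int) - test.score.astype(int)).groupby(test.year).mean()
print(drift)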
