Abstract
A significant share of education research uses data collected by "enumerators." It is well documented that "enumerator effects" (inconsistent practices among the people who administer measurement tools) can be a key source of error in survey data collection, but it is less clear whether they also affect academic assessments. We leverage a remote, phone-based mathematics assessment of primary school students and a survey of their parents in Kenya. Enumerators, who were teachers in our partner's network, were randomly assigned to students so that we could identify enumerator effects. We find that both the academic assessment and the survey were prone to enumerator effects, and we use simulation to show that these effects were large enough to produce spurious results at a troubling rate in impact evaluations. We therefore recommend that assessment administrators randomize enumerators at the student level, orthogonal to the categories being compared (e.g., treatment and control groups), and train enumerators to minimize bias.