Abstract

Impact evaluations draw their data from two sources: surveys conducted for the evaluation or administrative data collected for other purposes. Both types of data have been used in impact evaluations of social programs. This study analyzes the causes of differences in impact estimates when survey data and administrative data are used to evaluate earnings impacts in social experiments, and it discusses the differences observed in eight evaluations of social experiments that used both data sources. There are important trade-offs between the two. Administrative data are less expensive but may not cover all income or the desired time period, whereas surveys can be designed to avoid these problems. Errors can arise from nonresponse or from misreporting, and they can be balanced or unbalanced between the treatment and control groups. We find that earnings are usually higher in survey data than in administrative data, owing to differences in coverage and to likely overreporting of overtime hours and pay in surveys. Evaluations using survey data usually find greater impacts, sometimes much greater. The much lower cost of administrative data makes their use attractive, but such data are still subject to underreporting and other problems. We recommend further evaluations using both types of data, with investigative audits, to better understand the sources and magnitudes of errors in each so that appropriate corrections to the data can be made.
