Abstract

This paper uses newly available data from the Web of Science on publications matched to researchers in the Survey of Doctorate Recipients to compare the quality of scientific publication data collected by surveys with that produced by algorithmic approaches. We illustrate the different types of measurement error in self-reported and machine-generated data by estimating how publication measures from the two approaches relate to career outcomes (e.g., salaries and faculty rankings). We find that the potential biases in the self-reports are smaller than those in the algorithmic data. Moreover, the errors in the two approaches are quite intuitive: measurement error in the algorithmic data stems mainly from matching accuracy, which depends primarily on the frequency of names and on the data available for matching, while noise in the self-reports grows over the career as researchers’ publication records become more complex, harder to recall, and less immediately relevant for career progress. At a methodological level, we show how the two approaches can be evaluated using accepted statistical methods in the absence of gold-standard data. We also provide guidance on how to use the new linked data.
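
As a point of reference for the reasoning above, and not a specification taken from the paper itself, the textbook classical measurement-error model illustrates why regressions of career outcomes on publication measures can reveal which data source is noisier: the noisier measure attenuates the estimated association more strongly.

\[
  y_i = \alpha + \beta P_i + \varepsilon_i, \qquad \tilde{P}_i = P_i + u_i,
  \qquad
  \operatorname{plim}\,\hat{\beta}_{\mathrm{OLS}} = \beta\,\frac{\sigma_P^2}{\sigma_P^2 + \sigma_u^2},
\]

where \(y_i\) is a career outcome (e.g., log salary), \(P_i\) is the true publication count, and \(\tilde{P}_i\) is the count observed in either the self-report or the algorithmic match; a smaller reliability ratio \(\sigma_P^2/(\sigma_P^2 + \sigma_u^2)\) corresponds to a noisier data source.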
