Abstract

Collaborative efforts like modern scientific research depend on methods to evaluate and absorb participants’ contributions, and at the research frontier this evaluative step is often accomplished through the peer review of grants and manuscripts. With billions of dollars and space in prestigious journals hinging on the decisions of reviewers, the review system has attracted consistent scrutiny. Many of the thousands of studies scrutinizing peer review focus on the reliability, validity, and fairness of reviewers’ decisions. Largely absent from this debate about peer review’s internal practices are the consequences of these practices for the character and diffusion of published knowledge. This dissertation shifts the focus to the consequences of peer review practices through four case studies.

The first case investigates the revisions that authors of quantitative sociological manuscripts negotiate during peer review and reveals that substantial changes concern primarily the manuscripts’ theoretical framing, while the data analyses remain relatively stable. The case argues that the greater relative value placed on data and analysis over framing incentivizes investment in the former over the latter.

The second case interrogates the common practice of using post-publication citations to evaluate the validity of review decisions. Analysis of the reviews of manuscripts submitted to the American Sociological Review from 1977 to 1981, together with the manuscripts’ subsequent citations, reveals no relationship between the two. However, reviewers’ comments show that they focused on the soundness of the manuscripts’ arguments, not on their potential impact. The case shows that a review process that yields publications of variable impact is not necessarily a failing of peer review, but rather a consequence of reviewers and citers drawing on different dimensions of value.

The third case examines the consequences for quantitative sociology of the common bias toward positive findings in peer review. Using hundreds of published studies based on the General Social Survey, I perturb the published statistical relationships with slight changes to the model specifications. At the time of publication, the results are relatively robust to this perturbation. I also re-estimate the published relationships using waves of the Survey collected after publication; these estimates indicate that published findings are weakened far more by social change than by specification choices.

The last case focuses on the consequences of scientific peer review judgments outside the sphere of science. By measuring the rates at which millions of scientific journal articles are used as sources in Wikipedia, the largest online encyclopedia, I show that Wikipedia editors preferentially draw on high-impact and more accessible (open access) journals. The case shows that increased accessibility of the scientific literature improves its diffusion to the lay public, and that a status ordering that review practices establish in one sphere, science, may be exported wholesale to a disparate context, Wikipedia.

