Objectives
While the presence of publication bias in clinical research is well documented, little is known about its role in the reporting of health services research. This paper explores stakeholder perceptions of, and experiences with, the role of publication and related biases in quantitative research relating to the quality, accessibility and organization of health services.

Methods
We present findings from semi-structured interviews with those responsible for the funding, publishing and/or conduct of quantitative health services research, primarily in the UK. Additional data collection included interviews with health care decision makers as ‘end users’ of health services research, and a focus group with patient and service user representatives. The final sample comprised 24 interviews and eight focus group participants.

Results
Many study participants felt unable to say with any degree of certainty whether publication bias represents a significant problem in quantitative health services research. Participants drew broad contrasts between externally funded and peer reviewed research on the one hand, and end user funded quality improvement projects on the other, with the latter perceived as more vulnerable to selective publication and author over-claiming. Multiple study objectives, and a general acceptance of ‘mess and noise’ in the data and its interpretation, were seen to reduce the importance attached to replicable estimates of effect sizes in health services research. The relative absence of external scrutiny, whether from manufacturers of interventions or from health system decision makers, added to a general sense of the ‘low stakes’ of health services research. As a result, while many participants advocated study pre-registration and the use of protocols to pre-specify outcomes, others saw this as an unwarranted imposition.

Conclusions
This study finds that incentives towards publication and related bias are likely to be present in health services research, but not to the same degree as in clinical research. These incentives were seen as being offset by other forms of ‘novelty’ bias in the reporting and publishing of research findings.