Abstract

Little is known about the extent and nature of publication bias in economic evaluations. Our objective was to determine whether economic evaluations are subject to publication bias by considering whether economic data are as likely to be reported, and reported as promptly, as effectiveness data.

Trials that intended to conduct an economic analysis and ended before 2008 were identified in the International Standard Randomised Controlled Trial Number (ISRCTN) register; a random sample of 100 trials was retrieved. Fifty comparator trials were randomly drawn from those not identified as intending to conduct an economic study. The trial start and end dates, estimated sample size and funder type were extracted. For trials planning economic evaluations, effectiveness and economic publications were sought; publication dates and journal impact factors were extracted. Effectiveness abstracts were assessed for whether they reached a firm conclusion that one intervention was most effective. Primary investigators were contacted about reasons for non-publication of results, or for differential publication strategies for effectiveness and economic results.

Trials planning an economic study were more likely to be funded by government (p=0.01) and were larger (p=0.003) than other trials. Trials planning an economic evaluation had a mean of 6.5 (range 2.7-13.2) years since the trial end in which to publish their results. Effectiveness results were reported by 70%, while only 43% published economic evaluations (p<0.001). Reasons for non-publication of economic results included the intervention being ineffective and staffing issues. Funding source, time since trial end and length of study were not associated with a higher probability of publishing the economic evaluation. However, studies that were small or of unknown size were significantly less likely to publish economic evaluations than large studies (p<0.001). The authors' confidence in labelling one intervention clearly most effective did not affect the probability of publication. Where both were published, the mean time to publication was 0.7 years longer for cost-effectiveness data than for effectiveness data (p=0.001). The median journal impact factor was 1.6 points higher for effectiveness publications than for the corresponding economic publications (p=0.01). Reasons for publishing in different journals included editorial decision making and the additional time that economic evaluation takes to conduct.

Trials that intend to conduct an economic analysis are less likely to report economic data than effectiveness data. Where economic results do appear, they are published later and in journals with lower impact factors. These results suggest that economic output may be more susceptible than effectiveness data to publication bias. Funders, grant reviewers and trialists themselves should ensure economic evaluations are prioritised and adequately staffed to avoid potential problems with bias.
