Abstract

Objectives: To evaluate the completeness of reporting in a sample of abstracts of diagnostic accuracy studies before and after the release of the Standards for Reporting of Diagnostic Accuracy Studies (STARD) for Abstracts in 2017.

Methods: We included 278 diagnostic accuracy abstracts published in 2012 (N = 138) and 2019 (N = 140) and indexed in EMBASE. We analyzed their adherence to 10 items of the 11-item STARD for Abstracts checklist, and we explored variability in reporting across abstract characteristics using multivariable Poisson modeling.

Results: Most of the 278 abstracts (75%) were published in discipline-specific journals, with a median impact factor of 2.9 (IQR: 1.9–3.7). Imaging tests were the most frequently evaluated test type (41%). Overall, a mean of 5.4/10 (SD: 1.4) STARD for Abstracts items was reported (range: 1.2–9.7). Items reported in fewer than one-third of abstracts included 'eligible patient demographics' (24%), 'setting of recruitment' (30%), 'method of enrollment' (18%), 'estimates of precision for accuracy measures' (26%), and 'protocol registration details' (4%). We observed substantial variability in reporting across several abstract characteristics; higher adherence was associated with use of a structured abstract, no journal limit on abstract word count, an abstract word count above the median, a one-gate enrollment design, and prospective data collection. There was no evidence of an increase in the number of reported items between 2012 and 2019 (5.2 vs. 5.5 items; adjusted reporting ratio: 1.04 [95% CI: 0.98–1.10]).

Conclusion: This sample of diagnostic accuracy abstracts revealed suboptimal reporting practices, with no improvement between 2012 and 2019. The test evaluation field could benefit from targeted knowledge translation strategies to improve completeness of reporting in abstracts.
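
For readers unfamiliar with the modeling approach named in the Methods, the sketch below illustrates how an adjusted reporting ratio could be estimated with a multivariable Poisson model. It is a minimal illustration only: the data frame, variable names, and covariates are assumptions made for demonstration, not the study's actual dataset or analysis code.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per abstract, with the count of STARD for Abstracts
# items reported (0-10) and selected abstract characteristics (all names assumed).
df = pd.DataFrame({
    "items_reported":  [5, 6, 4, 7, 3, 8, 5, 6],
    "year_2019":       [0, 0, 0, 0, 1, 1, 1, 1],  # 1 = published in 2019
    "structured":      [1, 0, 1, 1, 0, 1, 1, 0],  # structured abstract
    "words_above_med": [1, 0, 0, 1, 1, 1, 0, 1],  # word count above the median
})

# Poisson model for the number of reported items; robust (sandwich) standard
# errors guard against violations of the Poisson variance assumption.
model = smf.glm("items_reported ~ year_2019 + structured + words_above_med",
                data=df, family=sm.families.Poisson())
result = model.fit(cov_type="HC0")

# Exponentiated coefficients are adjusted reporting ratios, e.g. 2019 vs. 2012,
# analogous in form to the ratio of 1.04 quoted in the Results.
print(np.exp(result.params))
print(np.exp(result.conf_int()))

In this framing, a ratio above 1 for year_2019 would indicate more items reported in 2019 than in 2012 after adjustment for the other abstract characteristics.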
