Abstract

Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks, and there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform the prioritization of automation efforts toward identifying the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research, in order to inform aspects of bias and error assessment that are unique to preclinical research. Using two samples of 100 preclinical experiments each, one related to brain trauma and one to toxicology, we assessed the design elements described by the investigators. We evaluated the Materials and Methods sections of the reports for descriptions of the following design elements: 1) use of a comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of the factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential pseudo-replication (99/200). Complex factor arrangements were common: 112 experiments used some form of factorial design (complete, incomplete, or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means that understanding bias and error in preclinical experimental design may require greater expertise than simple parallel designs demand. Similarly, the use of complex factor arrangements creates novel challenges for accurate automation of data extraction and of bias and error assessment in preclinical experiments.


Introduction

Rationale

Systematic reviews are increasingly incorporating data from preclinical animal experiments [1,2,3,4,5]. Accurate and efficient interpretation of the study design used in such experiments is an important component of that process, because a unique aspect of systematic reviews is the assessment of bias and errors in the study design, in addition to the extraction of effect sizes and effect size precision. A study described as an "individually randomized, 3 by 2 factorial design blocked by sex, with repeated measures and blinded outcome assessment" immediately reveals the design element options employed by the investigators. It conveys that the investigators used design element options that relate to the risk of systematic bias (randomized and blinded) and that have the potential to create unit-of-analysis errors (repeated measures). A unit-of-analysis error occurs when the unit of allocation of the intervention differs from the unit used in the statistical analysis. Such a description also tells the reviewer that the results will likely contain an assessment of two main effects and an interaction (factorial design).
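To illustrate why the unit of analysis matters, the short Python simulation below (a hypothetical sketch, not drawn from the reviewed experiments) treats five repeated measurements on each of ten animals per group as if they were fifty independent animals. Under a true null effect, the measurement-level analysis yields far more false positives than the nominal 5%, while the animal-level analysis, which matches the unit of allocation, does not:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_animals, n_repeats = 10, 5         # animals per group, repeated measurements per animal
    animal_sd, residual_sd = 2.0, 0.5    # between-animal variation dominates within-animal variation

    def simulate_group():
        # Each animal has its own true level; its repeated measures cluster tightly around it.
        animal_means = rng.normal(0.0, animal_sd, n_animals)
        return np.repeat(animal_means, n_repeats) + rng.normal(0.0, residual_sd, n_animals * n_repeats)

    n_sims = 2000
    false_positives = {"measurement-level (wrong unit)": 0, "animal-level (correct unit)": 0}
    for _ in range(n_sims):
        control, treated = simulate_group(), simulate_group()   # no true treatment effect

        # Unit-of-analysis error: every measurement counted as an independent unit (n = 50 per group).
        _, p_wrong = stats.ttest_ind(control, treated)
        false_positives["measurement-level (wrong unit)"] += p_wrong < 0.05

        # Correct unit: one mean per animal, matching the unit of allocation (n = 10 per group).
        _, p_right = stats.ttest_ind(control.reshape(n_animals, -1).mean(axis=1),
                                     treated.reshape(n_animals, -1).mean(axis=1))
        false_positives["animal-level (correct unit)"] += p_right < 0.05

    for label, count in false_positives.items():
        print(f"{label}: false-positive rate = {count / n_sims:.3f}")

In practice the clustering is often handled with a mixed-effects model that includes a random intercept per animal rather than by averaging, but the per-animal summary suffices here to show how pseudo-replication understates standard errors and inflates false positives.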
