Abstract

Laboratory workflows and preclinical models have become increasingly diverse and complex. Confronted with a multitude of information of ambiguous relevance to their specific experiments, scientists risk overlooking critical factors that can influence the planning, conduct and results of studies and that should have been considered a priori. To address this problem, we developed “PEERS” (Platform for the Exchange of Experimental Research Standards), an open-access online platform built to aid scientists in determining which experimental factors and variables are most likely to affect the outcome of a specific test, model or assay and therefore ought to be considered during the design, execution and reporting stages. The PEERS database is categorized into in vivo and in vitro experiments and provides lists of factors derived from the scientific literature that have been deemed critical for experimentation. The platform is based on a structured and transparent system for rating the strength of evidence related to each identified factor and its relevance for a specific method or model. The rating procedure is not limited to the PEERS working group but also allows for community-based grading of evidence. Here we describe a working prototype using the Open Field paradigm in rodents and present the selection of factors specific to each experimental setup and the rating system. PEERS not only offers users the ability to search for information that facilitates experimental rigor, but also draws on the engagement of the scientific community to actively expand the information contained within the platform. Collectively, by helping scientists search for the specific factors relevant to their experiments and share experimental knowledge in a standardized manner, PEERS will serve as a collaborative exchange and analysis tool to enhance data validity and robustness as well as the reproducibility of preclinical research. PEERS thus offers a vetted, independent tool by which to judge the quality of information available on a given test or model, identifies knowledge gaps and provides guidance on the key methodological considerations that should be prioritized to ensure that preclinical research is conducted to the highest standards and best practice.
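To make this organization concrete, the following is a minimal sketch of how such a factor database could be represented and queried; all names (FactorEntry, factors_for_method) and the numeric evidence rating are illustrative assumptions rather than the platform's actual schema.

```python
# A minimal sketch of a PEERS-style factor record and lookup.
# Field names and the rating scale are illustrative assumptions,
# not the platform's actual schema.
from dataclasses import dataclass


@dataclass
class FactorEntry:
    """One experimental factor linked to a specific test, model or assay."""
    category: str            # "in vivo" or "in vitro"
    method: str              # e.g. "Open Field (rodent)"
    factor: str              # e.g. "arena illumination"
    evidence_rating: float   # graded strength of the supporting evidence
    references: list[str]    # literature the grading is based on


def factors_for_method(db: list[FactorEntry], method: str) -> list[FactorEntry]:
    """List all recorded factors for a test/model, strongest evidence first."""
    hits = [entry for entry in db if entry.method == method]
    return sorted(hits, key=lambda entry: entry.evidence_rating, reverse=True)
```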

Highlights

  • Biomedical research, in the preclinical sphere, has been subject to scrutiny for the low levels of reproducibility that continue to persist across laboratories (Ioannidis, 2005)

  • Reproducibility checks are common in fields like physics (CERN, 2018), but rarer in biological disciplines such as neuroscience and pharmacotherapy, which are increasingly facing a “reproducibility crisis” (Bespalov et al., 2016; Bespalov and Steckler, 2018; Botvinik-Nezer et al., 2020)

  • To mitigate some of the above issues, we have developed PEERS, an open-access online platform that aids scientists in determining which experimental factors are most likely to affect the outcome of a specific test, model or assay and deserve consideration during study design, execution and reporting

INTRODUCTION

Biomedical research, in the preclinical sphere, has been subject to scrutiny for the low levels of reproducibility that continue to persist across laboratories (Ioannidis, 2005). We have gone one step further by providing a grading of the strength of this evidence, so that examination of a specific factor in the database provides the user with an extracted summary of all relevant papers and their scores from one or more assessors (scorecards). This required the development of a generic “checklist” to determine the quality of each paper, the details of which are described below.

Checklist for Grading of Evidence/Publications – The Evaluation

Concurrent with the identification of experimental factors and the review of the literature, novel detailed “scorecards” to evaluate the quality of scientific evidence were refined through multiple Delphi rounds within the PEERS Working Group. These scorecards contain a checklist with two main domains: Methods and Results. Each reference is evaluated by two or more assessors to remove any source of bias, and an average score of all scorecards is presented, along with the calculated standard deviation.
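As a concrete illustration of this aggregation step, the sketch below shows how scores from two or more assessors could be averaged and their spread reported; the Scorecard structure, the numeric scores and the function name are hypothetical stand-ins, not the platform's actual implementation.

```python
# A minimal sketch of PEERS-style scorecard aggregation.
# The Scorecard fields, the score scale and aggregate_scorecards
# are illustrative assumptions, not the platform's actual code.
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class Scorecard:
    """One assessor's checklist evaluation of a single reference."""
    assessor: str
    methods_score: float   # checklist score for the Methods domain
    results_score: float   # checklist score for the Results domain

    @property
    def total(self) -> float:
        return self.methods_score + self.results_score


def aggregate_scorecards(cards: list[Scorecard]) -> tuple[float, float]:
    """Average the total scores from two or more assessors and report
    the standard deviation as a measure of inter-assessor agreement."""
    if len(cards) < 2:
        raise ValueError("Each reference is evaluated by at least two assessors")
    totals = [card.total for card in cards]
    return mean(totals), stdev(totals)


# Example: two assessors grading the same Open Field publication.
cards = [
    Scorecard("assessor_A", methods_score=7.0, results_score=6.0),
    Scorecard("assessor_B", methods_score=8.0, results_score=5.5),
]
avg, sd = aggregate_scorecards(cards)
print(f"average score: {avg:.2f} (SD {sd:.2f})")
```

Reporting the standard deviation alongside the mean makes inter-assessor disagreement visible rather than hiding it in a single number.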

