Abstract

We developed a reporting guideline to provide authors with guidance about what should be reported when writing up, for publication in a scientific journal, research that uses a particular type of design: the single-case experimental design. This report describes the methods used to develop the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016. As a result of 2 online surveys and a 2-day meeting of experts, the SCRIBE 2016 checklist was developed: a set of 26 items that authors need to address when writing about single-case research. This article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016), which provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated. We recommend that the SCRIBE 2016 be used by authors preparing manuscripts describing single-case research for publication, as well as by journal reviewers and editors who are evaluating such manuscripts.

Scientific Abstract

Reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) Statement, improve the reporting of research in the medical literature (Turner et al., 2012). Many such guidelines exist, and the CONSORT Extension to Nonpharmacological Trials (Boutron et al., 2008) provides suitable guidance for reporting between-groups intervention studies in the behavioral sciences. The CONSORT Extension for N-of-1 Trials (CENT 2015) was developed for multiple crossover trials with single individuals in the medical sciences (Shamseer et al., 2015; Vohra et al., 2015), but there is no reporting guideline in the CONSORT tradition for single-case research used in the behavioral sciences. We developed the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 to meet this need. This Statement article describes the methodology used to develop the SCRIBE 2016, along with the outcome of 2 Delphi surveys and a consensus meeting of experts, and presents the resulting 26-item SCRIBE 2016 checklist. The article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016), which provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated.

Highlights

  • Scientific background: Describe the scientific background to identify the issue(s) under analysis, current scientific knowledge, and gaps in that knowledge base.

  • We developed the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 to meet the need for a reporting guideline for single-case research in the behavioral sciences.

  • Even though single-case experimental intervention research appears with comparable frequency to between-groups research in the aphasiology, education, psychology, and neurorehabilitation literature (Beeson & Robey, 2006; Perdices & Tate, 2009; Shadish & Sullivan, 2011), evidence of inadequate and incomplete reporting is documented in multiple surveys of this literature in different populations (Barker et al., 2013; Didden et al., 2006; Maggin et al., 2011; Smith, 2012; Tate et al., 2014). To address these issues, we developed a reporting guideline, entitled the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016, to assist authors, journal reviewers, and editors to improve the reporting of single-case research.


Summary

Within each class of single-case design, the adequacy of experimental controls, and whether or not the degree of experimental control meets design standards (see Horner et al., 2005; Kratochwill et al., 2013), vary considerably (cf. A-B-A vs. A-B-A-B designs; multiple-baseline designs with two vs. three baselines/tiers). Reports of these designs in the literature are of variable scientific quality, and features of internal and external validity can be evaluated with scales measuring scientific robustness in single-case designs, such as those described in Maggin et al. (2014) and Tate et al. (2013b). Project funds were used to employ the project manager, set up and develop a web-based survey, hold a consensus meeting, and sponsor participants to attend the consensus meeting.
