Abstract

Single-case experimental designs (SCEDs) are a group of methodologies of growing interest, aiming to test the effectiveness of an intervention at the single-participant level using a rigorous and prospective methodology. SCEDs may promote flexibility in how we design research protocols and inform clinical decision-making, especially for personalized outcome measures, inclusion of families with challenging needs, measurement of children's progress in relation to parental implementation of interventions, and focus on personal goals. Design options for SCEDs are discussed in relation to an expected on/off effect of the intervention (e.g. school/environmental adaptation, assistive technology devices) or, alternatively, to an expected carry-over/maintenance of effects (interventions aiming to develop or restore a function). Randomization in multiple-baseline designs and 'power' calculations are explained. The most frequent reasons for not detecting an intervention effect in SCEDs are also presented, especially in relation to baseline length, trend, and instability. The use of SCEDs on the front and back ends of randomized controlled trials is discussed.

What this paper adds

Single-case experimental designs (SCEDs) may promote flexibility in how we design research protocols. Randomization in multiple-baseline designs allows 'power' calculations based on randomization tests. Whenever feasible, N-of-1 trials should be preferred to other SCEDs and to group randomized controlled trials.
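The idea behind randomization tests in a multiple-baseline design can be illustrated with a minimal sketch. Assuming each participant's intervention start point is randomly drawn from a predefined set of admissible start points (the specific data, start points, and effect measure below are hypothetical; the test statistic used here, the summed mean difference between intervention and baseline phases, is just one common choice), the observed statistic is compared against its distribution over all admissible start-point assignments:

```python
# Hypothetical sketch of a randomization test for a multiple-baseline
# SCED. Assumption: each participant's intervention start point was
# randomly selected from a predefined set of admissible start points.
from itertools import product
from statistics import mean

def phase_effect(series, start):
    """Mean of phase B minus mean of phase A, with B beginning at `start`."""
    return mean(series[start:]) - mean(series[:start])

def randomization_test(data, admissible, actual_starts):
    """One-sided randomization-test p-value across participants.

    data:          {participant: list of repeated measurements}
    admissible:    {participant: list of permitted start points}
    actual_starts: {participant: start point actually drawn}
    """
    participants = list(data)
    observed = sum(phase_effect(data[p], actual_starts[p]) for p in participants)
    # Enumerate every admissible combination of start points.
    stats = [sum(phase_effect(data[p], s) for p, s in zip(participants, combo))
             for combo in product(*(admissible[p] for p in participants))]
    # p-value: proportion of assignments at least as extreme as observed.
    return sum(1 for s in stats if s >= observed) / len(stats)
```

This also makes the 'power' point concrete: the smallest attainable p-value is the reciprocal of the number of admissible assignments, so with three participants and three admissible start points each, p can be no smaller than 1/27 (about .037), whatever the size of the effect.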
