Single-case experimental designs (SCEDs) are used to study the effects of interventions on the behavior of individual cases by comparing repeated measurements of an outcome under different conditions. In research areas where SCEDs are prevalent, there is a need for methods to synthesize results across multiple studies. One approach to synthesis uses a multilevel meta-analysis (MLMA) model to describe the distribution of effect sizes across studies and across cases within studies. However, MLMA relies on accurate sampling variances of the effect size estimates for each case, which may not be available because of auto-correlation in the raw data series. One possible solution is to combine MLMA with robust variance estimation (RVE), which provides valid assessments of uncertainty even when the sampling variances of the effect size estimates are inaccurate. Another possible solution is to forgo MLMA and use simpler, ordinary least squares (OLS) methods with RVE. This study evaluates the performance of effect size estimators and methods of synthesizing SCEDs in the presence of auto-correlation, for several different effect size metrics, via a Monte Carlo simulation designed to emulate the features of real data series. Results demonstrate that the MLMA model with RVE performs well in terms of bias, accuracy, and confidence interval coverage for estimating overall average log response ratios. The OLS estimator with RVE performs best for estimating overall average Tau effect sizes. None of the available methods perform adequately for meta-analysis of within-case standardized mean differences.
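For context, a minimal sketch of the kind of model described above (the exact specification used in the article may differ): in a three-level MLMA, the effect size estimate $T_{ij}$ for case $j$ in study $i$ is modeled as

$$T_{ij} = \mu + u_i + v_{ij} + e_{ij}, \qquad u_i \sim N(0, \tau^2), \quad v_{ij} \sim N(0, \omega^2), \quad e_{ij} \sim N(0, V_{ij}),$$

where $\mu$ is the overall average effect, $\tau^2$ and $\omega^2$ are the between-study and between-case variance components, and $V_{ij}$ is the sampling variance of $T_{ij}$, treated as known. Auto-correlation in the raw data series can make the assumed $V_{ij}$ inaccurate, which motivates pairing the model with RVE: a cluster-robust (sandwich) variance estimator of the general form

$$\widehat{\operatorname{Var}}_R(\hat{\mu}) = \left(\sum_i \mathbf{x}_i' \mathbf{W}_i \mathbf{x}_i\right)^{-1} \left(\sum_i \mathbf{x}_i' \mathbf{W}_i \mathbf{A}_i \mathbf{e}_i \mathbf{e}_i' \mathbf{A}_i' \mathbf{W}_i \mathbf{x}_i\right) \left(\sum_i \mathbf{x}_i' \mathbf{W}_i \mathbf{x}_i\right)^{-1},$$

with study-level residuals $\mathbf{e}_i$, working weights $\mathbf{W}_i$, and small-sample adjustment matrices $\mathbf{A}_i$ (e.g., CR2), clustering at the study level so that inferences about $\hat{\mu}$ remain approximately valid even when the $V_{ij}$ are misspecified.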