Systematic reviews and meta-analyses are viewed as potent tools for generalized causal inference. These reviews are routinely used to inform decision makers about expected effects of interventions. However, the logic of generalization from research reviews to diverse policy and practice contexts is not well developed. Building on sampling theory, concerns about epistemic uncertainty, and principles of generalized causal inference, this article presents a pragmatic approach to generalizability assessment for use with systematic reviews and meta-analyses. This approach is applied to two systematic reviews and meta-analyses of effects of "evidence-based" psychosocial interventions for youth and families. Evaluations included in systematic reviews are not necessarily representative of the populations and treatments of interest. Generalizability of results is limited by high risks of bias, uncertain estimates, and insufficient descriptive data from impact evaluations. Systematic reviews and meta-analyses can be used to test generalizability claims, explore heterogeneity, and identify potential moderators of effects. These reviews can also produce pooled estimates that are not representative of any larger set of studies, programs, or people. Further work is needed to improve the conduct and reporting of impact evaluations and systematic reviews, to develop practical approaches to generalizability assessment, and to guide applications of interventions in diverse policy and practice contexts.