Beginning in 2010, the US Department of Health and Human Services (HHS) funded more than 40 evaluations of adolescent pregnancy prevention interventions. The government's emphasis on rigor and transparency, along with a requirement that grantees collect standardized behavioral outcomes, ensured that findings could be meaningfully compared across evaluations. We used random-effects and mixed-effects meta-analysis to analyze the findings generated by these evaluations to learn whether program elements, program implementation features, and participant demographics were associated with effects on adolescent sexual risk behavior.

We screened all 43 independent evaluation reports funded by HHS and completed before October 1, 2016, some of which included multiple studies. HHS released, and our team considered, all such studies regardless of favorability or statistical significance. Of these studies, we included those that used a randomized or high-quality quasi-experimental research design. We excluded studies that did not use statistical matching or provide pretest equivalence data on a measure of sexual behavior or a close proxy. We also excluded studies that compared 2 pregnancy prevention interventions without a control group. A total of 44 studies from 39 reports, comprising 51 150 youths, met the inclusion criteria.

Two researchers extracted data from each study using standard systematic review and meta-analysis procedures. In addition, study authors provided individual participant data for a subset of 34 studies. We used mixed-effects meta-regressions with aggregate data to examine whether program or participant characteristics were associated with program effects on adolescent sexual risk behaviors and their consequences. To examine whether individual-level participant characteristics such as age, gender, and race/ethnicity were associated with program effects, we used a 1-stage meta-regression approach combining participant-level data (48 635 youths) with aggregate data from the 10 studies for which participant-level data were not available.

Across all 44 studies, we found small but statistically nonsignificant mean effects favoring the programs and little variability around those means. Only 2 program characteristics showed statistically reliable relationships with program effects. First, gender-specific (girl-only) programs yielded a statistically significant average effect size (P < .05). Second, programs with individualized service delivery were more effective than programs delivering services to youths in small groups (P < .05). We found no other statistically significant associations between program effects and program characteristics, participant characteristics, or evaluation methods. Nor was there a statistically significant difference in mean effect sizes between programs with previous evidence of effectiveness and previously untested programs.

Although several individual studies reported positive impacts, the average effects were small and there was minimal variation in effect sizes across studies on all of the outcomes assessed. Thus, we were unable to confidently identify which individual program characteristics were associated with effects. However, these studies examined relatively short-term effects, and it remains an open question whether some programs, perhaps with distinctive characteristics, will show longer-term effects as more of the adolescent participants become sexually active.
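To make the aggregate-data analytic approach concrete, the sketch below shows a DerSimonian-Laird random-effects pooling step followed by a weighted least-squares meta-regression on a single binary moderator (labeled here as girl-only vs. coed delivery). This is a minimal illustration with synthetic effect sizes and variances, not the authors' code or data, and the moderator coding and numeric values are assumptions for demonstration only.

```python
"""
Illustrative sketch only: DerSimonian-Laird random-effects pooling and a
weighted least-squares meta-regression on a hypothetical binary moderator.
Effect sizes and variances are synthetic, not data from the HHS evaluations.
"""
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-study effect sizes (standardized mean differences),
# sampling variances, and one binary moderator per study.
k = 44                                   # number of studies (matches the review's count)
girl_only = rng.integers(0, 2, size=k)   # hypothetical moderator: 1 = girl-only program
v = rng.uniform(0.005, 0.05, size=k)     # sampling variances
d = rng.normal(0.03 + 0.05 * girl_only, np.sqrt(v + 0.001))  # observed effects

# --- Random-effects pooling (DerSimonian-Laird) ---
w_fe = 1.0 / v                                       # fixed-effect weights
d_fe = np.sum(w_fe * d) / np.sum(w_fe)               # fixed-effect mean
Q = np.sum(w_fe * (d - d_fe) ** 2)                   # Cochran's Q
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (Q - (k - 1)) / c)                   # between-study variance
w_re = 1.0 / (v + tau2)                              # random-effects weights
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"RE mean effect: {d_re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.4f}")

# --- Moderator analysis (mixed-effects meta-regression, simplified) ---
# Weighted least squares with weights 1/(v_i + tau^2); a full mixed-effects
# model would re-estimate the residual tau^2, omitted here for brevity.
X = np.column_stack([np.ones(k), girl_only])
W = np.diag(w_re)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
se_beta = np.sqrt(np.diag(np.linalg.inv(X.T @ W @ X)))
print(f"Moderator coefficient (girl-only): {beta[1]:.3f} (SE {se_beta[1]:.3f})")
```

The 1-stage approach described in the abstract differs from this aggregate-data sketch: it fits a single regression to pooled participant-level records (with study-level random effects), combining them with aggregate data only for the studies lacking individual participant data.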
Public Health Implications. The success of a small number of individualized, girl-only interventions in changing behavioral outcomes suggests the need to reexamine the assumptions that underlie coed group approaches. However, given the almost total absence of similar programs targeting male adolescents, it is likely to be some time before evidence to support or reject such an approach for boys becomes available.