Abstract

This paper reviews the changing strategies for both process and outcome evaluations of teen pregnancy prevention programs over the past few decades. Implementation evaluations have emphasized discovering which program attributes are most effective in reducing teen pregnancy and its antecedents. Outcome evaluations have moved from collecting data on knowledge, attitudes, and program satisfaction to measuring behavior change, including postponement of sexual involvement, increased use of contraception, or reduction in teen pregnancy. High-quality randomized controlled trials or quasi-experimental designs are increasingly emphasized, as are sophisticated analysis techniques using multivariate analyses, controls for cluster sampling, and other strategies designed to build a more solid knowledge base about how to prevent early pregnancy.

Highlights

  • Over the past four decades one of the likely contributing factors to reduced rates of teen pregnancy in the United States has been the search for and discovery of programs that are effective in preventing this behavior

  • The purpose of this paper is to review the evolution of the evaluation of teen pregnancy programs from the late 1980s to the present, examining both process and outcome evaluations

  • Evaluation of teen pregnancy prevention programs has come a long way in the past few decades

Introduction

Over the past four decades one of the likely contributing factors to reduced rates of teen pregnancy in the United States has been the search for and discovery of programs that are effective in preventing this behavior. Evaluators have searched for the characteristics of effective programs, tried to learn which programs work best for various populations, and documented the magnitude of program effects on early pregnancy or its antecedents. Organizations began to produce lists of effective programs, using various criteria. These publications stressed a movement away from weak or non-empirical evaluation criteria and the adoption of more rigorous standards: credible lists were not based on process evaluation data (that is, they did not assess client or staff satisfaction with the program, whether the program was delivered as planned, or attendance patterns); intuition about program effects; faith in a particular approach or method; political or religious inclination; or rhetoric about what should or might work.

