Abstract

Social–emotional learning (SEL) programs are frequently evaluated using randomized controlled trial (RCT) methodology to assess program impacts. What is often missing from these studies is a robust parallel investigation of the program's multi-level implementation. The field of implementation science bridges the gap between the RCT framework and understanding program impacts through the systematic collection of data on program implementation components (e.g., adherence, quality, responsiveness). Data collected for these purposes can be used to answer questions about program impacts that matter to policy makers and practitioners in the field (e.g., Will the program work in practice? Under what conditions? For whom and why?). As such, the primary goal of this paper is to highlight the importance of studying implementation in the context of education RCTs by sharing one example: the conceptualization and related set of implementation measures we created for an ongoing study testing the impacts of an SEL program for preschool children. Specifically, we describe the process we used to develop an implementation conceptual framework that highlights the importance of studying implementation at two levels: (1) the implementation supports provided to teachers, and (2) teachers' implementation of the curriculum in the classroom with students. We then discuss how such multi-level implementation data can extend our understanding of program impacts by answering questions such as: “Why did the program work (or not work) to produce impacts?”; “What are the core components of the program?”; and “How can we improve the program in future implementations?”
